Abstract
With this article, I introduce the ‘algorithmic fix’ as a framework to analyze contemporary placemaking practices. I discuss how algorithmic practices of placemaking govern and control mobilities. I theorize such practices as the ‘algorithmic fix’, where location determination technologies, data practices, and machine learning algorithms are used together to ‘get a fix on’ our whereabouts with the aim of sorting and classifying both people and places. Through a case study of location intelligence, I demonstrate how these digital placemaking practices not only control and prevent physical mobilities – they are designed to fix who we are and whom we may become with the aim of creating a predictable future. I focus on geo-profiling, geo-fencing, and predictive policing as three key aspects of location intelligence to discuss how the ‘algorithmic fix’ as a framework can provide valuable insights for analyzing contemporary placemaking practices.
Introduction
In the last decade, the world has witnessed many crises of mobilities such as displacements and forced migrations, global warming, climate change, epidemics, and pandemics. These crises of mobilities have surfaced how technologies of mass surveillance and data analytics are used to predict the movements of people, information, and objects, and how such technologies of control are employed to overcome the uncertainties caused by such crises. For example, the US Department of Homeland Security enlisted the data analytics firm Palantir ‘to screen air travelers and keep tabs on immigrants’ (Waldman et al., 2018). Although Palantir’s data practices came under scrutiny in relation to human rights violations, in the United Kingdom, the Home Office awarded Palantir a contract ‘to manage the data analytics and architecture of the UK’s new “border flow tool”’ after 31 December 2020, when the United Kingdom left the European Union (Pegg, 2020).
Another recent example of how data analytics are used to control crises of mobilities is Covid-19. Technology companies, including Amazon, Microsoft, Google, and Palantir, are offering their services to track and control the spread of Covid-19 (Gould et al., 2020). Palantir is using machine learning models for predicting the locations of Covid-19 outbreaks to determine ‘where to deploy medical staff and supplies’ (Chapman, 2020). However, such predictions are not only used to plan the distribution of medical supplies. They are also used to check whether people comply with social distancing rules based on community mobility reports – profiles of places (and hence of people) (Google, 2020). Therefore, the immediate response to any type of crisis of mobilities seems to be found in quick predictive solutions (i.e. technological fixes), whether or not they infringe on civil liberties and human rights.
The use of these predictive and preemptive tools to govern mobilities raises many questions about their societal implications. For example, what kinds of inferences can be made about people and places, and how? How do such inferences influence practices of placemaking? What kinds of decisions can be made based on those automatic inferences about people and places? How do such decisions influence physical and social mobilities? How do these decisions reconfigure our relationships to/within places? This article is an effort toward answering these questions and understanding how algorithmic practices of placemaking govern and control mobilities.
I identify the control of mobilities based on algorithms and automated decision-making as specific practices of the ‘algorithmic fix’. The algorithmic fix is the reworking of location determination technologies, data, and machine learning algorithms together to get a fix on our whereabouts with the aim of governing mobilities. Rather than predicting the future, it is designed to ‘fix’ who we are and whom we may become with the aim of creating ‘predictable futures’: it creates categories for people and places that are then used as physical and virtual boundaries to ‘fix’ people in those places. Consequently, the algorithmic fix controls and prevents physical and social mobilities (especially of marginalized groups).
To develop the concept of algorithmic fix as a theoretical framework, I focus on a particular manifestation of the algorithmic fix, along the axes of mobility, placemaking, and location. I build on the existing studies on location intelligence and location analytics (e.g. Barreneche, 2012; Smith, 2019, 2020; Wilken, 2019a) and geodemographic profiling and segmentation (e.g. Burrows and Gane, 2006). I also build on the existing theories of mobility, placemaking, location, and spatial fix. I review these works to provide a theoretical and conceptual background to anchor my understanding of the algorithmic fix.
To introduce this concept, I focus on location intelligence as a case study. Location intelligence is an algorithmic practice of digital placemaking that predicts, monitors, controls, and prevents mobilities. The case study demonstrates how algorithmic practices of digital placemaking can be inherently predictive, preemptive, and discriminatory and how they ‘fix’ who we are and whom we may become with the aim of creating a predictable future. The case study presents three interrelated categories of location intelligence as a practice of digital placemaking. These are geo-profiling, predictive policing, and geo-fencing.
By analyzing location-based algorithmic practices of placemaking through the critical lens of the algorithmic fix, this article makes three key contributions. First, I present a review of the works that analyze the relationships between mobilities, placemaking, and spatial fix. Second, I extend the conceptualization of spatial fix and the ‘fix’ as a metaphor to introduce the algorithmic fix as a theoretical framework. I discuss its key aspects along with its possible implementations in digital placemaking practices and in governing mobilities. This provides a critical spatial lens through which we can understand and analyze contemporary debates on surveillance and datafication, algorithmic governance, and social inequalities. Third, I demonstrate how the algorithmic fix can be employed as a critical lens to understand and analyze the problematic coming together of (im)mobilities, digital placemaking, and location awareness.
Mobility, placemaking, and location
Mobility systems are designed to enable frictionless movement of people, information, and objects as much as possible (Urry, 2007). However, no matter how systematically mobilities are organized, they instigate some level of uncertainty regarding the whereabouts of people, objects, and information. A widely discussed example of this in relation to placemaking is why and how mobile phone users share where they are while on the move (Özkul, 2015; Laurier, 2001). Technically, mobile phones rely on the location of mobile cell towers to send and receive signals, which makes it possible to track and locate any mobile phone connected to a network, and hence its user. While these systems are used to save lives in emergency and rescue operations, they are also used to surveil and control lives, as in the example of police surveillance of protestors based on location data obtained from mobile phones (Rodríguez-Amat and Brantner, 2016).
As Gumpert and Drucker (2007) assert, this relationship between mobilities and uncertainty is paradoxical: ‘As we increase our ability to communicate to any place from anywhere at any time, we are subject to pinpoint location by ourselves or others as we move’ (p. 11). In other words, systems of mobilities evolve alongside technologies of real-time connectivity, location tracking, and machine learning to overcome this locational and contextual uncertainty and to make our social and spatial behaviors predictable. Yet, this contradictory and relational construction of systems of movement and systems of control reproduces and reinforces existing social inequalities and produces new ones in societies (Sheller, 2018). As a result, our experiences of mobilities and the places where those movements happen are somehow defined by where we are and when, and who we are and where we come from.
Spatial fix
Capitalism thrives on crises and fears uncertainty. This is why capitalist systems like to fix things for stability – for example, stable economies, financial markets, forecasted growths and returns on investment, and so on. Interestingly, as Thrift (2005) argues, uncertainty is also used as a resource by capitalist systems where it is acknowledged that ‘nothing can be permanently fixed’ (p. 2). That is why public and private technology companies are capitalizing on crises of mobilities and positioning them as causes of uncertainty and as things to be fixed.
Fix, as a metaphor, has been most commonly used in geographical scholarship, especially in geographical political economy. Within geographical political economy, some variations and uses of fix-thinking start with David Harvey’s spatial, temporal, and spatiotemporal fixes (Jessop, 2006). Harvey first used the term ‘spatial fix’ in his Antipode (1981) article to discuss the temporary solutions to the problem of capitalism’s crisis of overaccumulation. This includes geographical reorganization and expansion that are used to manage the crisis-ridden nature of capitalism. Hence, spatial fix can be understood as capitalism’s practice of placemaking.
As Graham and Anwar (2019) assert, Harvey’s spatial fix has been used in the literature both literally and metaphorically (p. 180). They explain that the literal meaning of fix has been used as ‘fixing capital in place in physical forms (factories or transportation infrastructure)’, whereas its metaphorical meaning is used as ‘in a solution to crises in capitalism through spatial reorganization of capital and specific strategies to address those crises’ (p. 180). Most recently, critical data studies scholars have built on Harvey’s conception of spatial fix in their analysis of how ‘big data serves as a “fix” for capitalism’s inherent tendencies toward overaccumulation, not through a spatial expansion outwards, but by a rendering smooth of the rough surfaces of individuals’ lives as they become knowable as commodified representations of self’ (Thatcher et al., 2016: 998, added emphasis). Similarly, in their article on digital spatial fix, Greene and Joseph (2015) critically reflect on networked digital practices and employ Harvey’s spatial fix to investigate capital accumulation through online expansion. Algorithmic fix builds on these understandings of spatial fix and blends it with the theorizations of mobilities, placemaking, and location as I discuss in the following section.
Algorithmic fix
Algorithmic fix is a disciplinary practice of digital placemaking. As a concept it encompasses both literal and metaphorical meanings of the term fix. In algorithmic fix, the literal meaning of spatial fix manifests itself in three forms: the technical (locational) fix, the material fixity of mobility systems, and the physical fixity of people. On a technical level, ‘fix’ refers to tracking and establishing the location of any mobile device – that is, getting a fix on someone’s whereabouts. Thus, the locational fix depends on systems of mobility. The second form of fix, the materiality of mobility systems, refers to the infrastructure of mobility systems, which is not necessarily stationary, as in a mobile phone mast, but can also be mobile, as with GPS satellites. These systems have social consequences. The technical and material fixity of mobility systems renders the third form of fixity possible – the physical fixity of people. This refers to the fixing of people in space and time by social sorting, segregation, and redlining, which are also manifested in spatial boundary-forming territorial practices – that is, not allowing access to particular places or to resources based on where one comes from and when. This type of spatial fixity and power (re)produces territorial communities, which define who belongs and who does not belong to a particular group (Agnew, 1994: 61).
For territorial fixity to work, geographical space has to be quantified as latitude and longitude, codified, and then governed based on calculative practices (Crampton and Elden, 2006). These calculative practices are enforced socially through spatial segmentation and anticipatory governance. Consequently, they result in the categorization of people based on location. This location-based social sorting makes bounded spaces homogeneous and, hence, predictable. Since the aim of algorithmic fix is to create predictable futures, it is built on the categorization of places according to their inferred commonalities and differences to create bounded homogeneous places. This enables the governance of mobilities through spatial and social sorting based on those anticipated meanings of places. As a result, algorithmic fix defines what kinds of behaviors would be expected and/or be acceptable where and when, and, most importantly, who would be allowed to inhabit those places by monitoring not only entry but also exit. By regulating mobilities through establishing such boundaries, algorithmic fix creates predictable futures.
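As a minimal illustration of how latitude and longitude are codified into calculable, bounded units, consider the sketch below. It is purely illustrative – the function name, cell size, and coordinates are my own assumptions, not drawn from any system discussed in this article:

```python
import math

def grid_cell(lat, lon, cell_deg=0.01):
    """Quantify a coordinate pair into a discrete grid cell
    (roughly 1 km x 1 km at mid-latitudes for cell_deg = 0.01)."""
    return (math.floor(lat / cell_deg), math.floor(lon / cell_deg))

# Two fixes roughly 30 m apart collapse into the same codified unit of
# space, while a fix a few streets away falls into a different one.
cell_a = grid_cell(51.5074, -0.1278)
cell_b = grid_cell(51.5076, -0.1275)
cell_c = grid_cell(51.5200, -0.1300)
```

Once space is discretized in this way, any rule – a price, a risk score, an access decision – can be attached to the cell rather than to the person, which is precisely the territorial logic described above.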
In a similar way, the metaphorical meaning of fix demonstrates itself in two forms in algorithmic fix: The first refers to a technological solution to a socioeconomic and political problem – uncertainty caused by mobilities. Therefore, the solution becomes creating predictable social and spatial environments and interactions that are created and governed by algorithmic decision-making. This leads to the second form of metaphorical fixity, the fixing of identities, in the form of algorithmic geo-profiles which are inherently location- and risk-based and data-driven (cf. Cheney-Lippold, 2017).
Fixing of identities is a location-based practice. As Castells (2010) asserts, ‘[…] the social construction of identity always takes place in a context marked by power relationships’ (p. 7). The meanings of identities are therefore contested, and the ways in which they are experienced, performed, and represented are political. For Bauman (2004), identity can be understood in relation to belonging and being in motion, and it is always in flux. In Bauman’s own words, ‘We seek and construct identities while on the move […]’ (p. 26, original emphasis). Similar to the construction of our individual and collective identities, identities of places are also constructed socially through repetitive movements (Tuan, 1974). Thus, identities are understood as (co-)constructive of places with their social, spatial, and temporal foundations (Evans and Saker, 2017). However, with the prevalence of data practices in everyday life, ‘Data about who we are becomes more important than who we really are or who we may choose to be’ (Cheney-Lippold, 2017: 25, original emphasis). In algorithmic fix, who we are does not only depend on data about who we are or whom we may become. Most importantly, it depends on where we were, where we are, and where we will be and when (Özkul, 2015; Halegoua, 2019; Polson, 2016). This results in a highly contested interpretation of the identities of both people and places (Özkul, 2017; Frith, 2015; Schwartz and Halegoua, 2015). Consequently, as a practice of digital placemaking, algorithmic fix relies on location data to ‘fix’ not only identities of places but also identities of people.
Case study
In the following section, I examine location intelligence as a demonstration of algorithmic fix as a practice of digital placemaking. The examination focuses on three key aspects of location intelligence. These aspects are geo-profiling, geo-fencing, and predictive policing. I present how these three aspects of location intelligence work together to create predictable futures through providing a technological fix for the uncertainties caused by crises of mobilities – governing mobilities and people. For each aspect of location intelligence, I provide a brief description as well as examples from granted patents owned by leading technology companies which provide location data analytics services. Later in the ‘Discussion: Predictable futures’ section, I reflect on these descriptions and examples to present problematics that could be investigated through the framework of the algorithmic fix.
Location intelligence
Location intelligence is a technology of power and control (Crampton, 2003). And this is what makes location intelligence important in overcoming uncertainties caused by higher rates of mobilities: creating predictable futures. As broadly defined by one of the pioneers of location intelligence, Skyhook, ‘location intelligence is the collection of insights we can gather from the interaction between people and physical locations’ (Skyhook, 2018). Consequently, location intelligence relies on automated insights based on various practices of placemaking.
These automated insights inform many decisions in governance and business. For example, location intelligence is also defined as ‘a methodology that marries location data and business data together to help solve a variety of business problems’ (Skyhook, 2018). What are these business problems? According to Factual, ‘the leader of location data’, these business problems could include understanding the real-world behavior of target customers better and reaching them with precision and accuracy (Factual, 2020). Unlike many forms of data analytics that rely on online behavior, location provides insights into real-world behaviors and complements ‘online’ behavior. This is because location data are unique to individuals. Hence, they play an important role in product and service personalization.
Another location intelligence company, PlaceIQ, also offers so-called solutions to similar business problems, with a specific focus on ‘connecting with and understanding audiences’ through location-based insights for marketing purposes (PlaceIQ, 2020). For example, according to a study conducted by the research firm 451 Research for PlaceIQ, ‘81% of the marketers rank location data as the first or the second most important element’ (PlaceIQ News, 2017). Location-based targeting has even been pronounced the ‘holy grail’ for marketers (Dredge, 2011).
Location intelligence does not only work through tracking the movements of people and creating potential profiles of them; it also works to infer information about the places that they visit to create profiles for places. This is a perpetual cycle where the estimated identities of places, automatically inferred from users’ data, then come to define and profile the people who inhabit or visit those places.
With these types of uses, location data can help businesses to profile consumers better through measuring foot traffic and gaining market and competitor insights (Barreneche, 2012; Barreneche and Wilken, 2015; Smith, 2019; Wilken, 2019b). However, it is important to note that location intelligence is not a new industry (Wilken, 2019a). Place has always been an important aspect of marketing and advertising, to the extent that it is discussed as one of the 4Ps of marketing along with product, price, and promotion in theories of marketing. Additionally, starting with the ‘locational analysis’ that dates back to the 1960s (Haggett, 1965), businesses have been employing demographic and statistical methods to identify the best locations for their investments.
Although its current mainstream use refers to a form of business intelligence based on location (e.g. location-based targeting and advertising), location intelligence can be understood as geospatial intelligence (GEOINT). As a form of GEOINT, location intelligence has been increasingly used for monitoring and controlling mobilities especially in the contexts of smart cities and transportation, autonomous vehicles, and smart borders and national security to name a few. As Crampton (2003: 135) asserts, especially in the United States, the dominant security discourse is mainly geospatial. According to the US National Geospatial-Intelligence Agency (NGA), ‘Anyone who sails a U.S. ship, flies a U.S. aircraft, makes national policy decisions, fights wars, locates targets, responds to natural disasters, or even navigates with a cellphone relies on NGA’ (NGA, n.d., added emphasis). Thus, anyone with a mobile phone is potentially a node to be tracked and located in the geospatial intelligence network.
Consequently, the location intelligence industry capitalizes on security and risk, where people and places are thought of as ‘resources that required management and protection’ (Crampton, 2003: 137, original emphasis). Such management and protection require technological fixes where insights inferred from data start shaping the future. In the case of location intelligence, these algorithmic fixes produce predictable futures, where decisions are made, and actions are taken, based on automatically generated geo-profiles of people and places.
Geo-profiling
Everyday and mostly mundane decisions made by humans, such as which route to take while driving, now involve algorithms that are also used to create profiles of people and places based on our past behavior patterns and real-time predictions. These profiles of people are called geo(demographic)-profiles (Crampton, 2003). With their spatial and temporal insights, location data provide a meaningful context for profiling algorithms, drawing a detailed picture of who we are and whether our movements or the places we visit can be flagged as ‘behaviors of interest’. Similarly, profiles of places are also created based on ‘places of interest’, which are also known as points of interest (POIs) in location-based targeting and advertising. These geo-profiles include ‘locations and events as a function of time, probability and pattern analysis’, which are then ‘analyzed to detect aberrant or potentially aberrant behaviors, or what [they] refer to as “behaviors of interest,” or “behavior-based triggers”’ (Trueposition, 2014). However, it is important to note that geo-profiling is not a recent phenomenon. It existed for many years before the commonplace use of mobile phones, especially as the geodemographic classifications used in marketing and political campaigning, whose initial development and use date back to the early 1970s in the United Kingdom and the United States (Webber and Burrows, 2018).
Geodemographic classification (also called neighborhood classification) is ‘an analytic approach that has been developed by, and is most commonly used by, market researchers, business analysts in the commercial sector, political parties, the police and local government’ (Webber and Burrows, 2018: 21). As Phillips and Curry (2003: 138) asserted, ‘the premise of geodemographics is that one can profitably divide the landscape into discrete spaces occupied by homogeneous groups of households and individuals’. The key idea behind these classifications (despite various differences in different systems of classifications) is that ‘if a set of areas are similar to each other across all widely used measures of demographic structure, they are also likely to be very similar across almost any manifestation of social values, behavior and consumption’ (Webber and Burrows, 2018: 21–22). Hence, these sociodemographic segmentations also manifest themselves in the physical structures of space, and such segmentation or clustering feed back into the social and physical spaces. Today, these classifications are increasingly used in social sorting and redlining based on software, which opened up prospects for ‘automated spatiality’ (Burrows and Gane, 2006: 793).
In addition to geo-profiles, location intelligence systems are used to automatically generate psychographic profiles based on the analysis of mobility patterns of mobile phone users derived from periodic location fixes and accumulated location histories (PlaceIQ, 2019; Qualcomm, 2018). Users’ mobility patterns are then combined with various databases such as business classifications (e.g. Standard Industrial Classification in the United States) and POIs (e.g. home, work and school) and also correlated with market segmentation databases (e.g. Nielsen and Claritas PRIZM) to generate marketing ratings (Qualcomm, 2018). These ratings are used for both the places and the people who visit those places, which results in homogenization of space.
Such homogenization of space dictates how specific users are expected to behave in any given place at any time based on probabilities and statistics. In some cases, unsupervised machine learning models are deployed to predict ‘a likelihood of the respective consumer engaging in behavior associated with a corresponding one of the psychographic segment’, since labelled data sets for each segment are not widely available (PlaceIQ, 2019). These automatically generated and cross-referenced profiles are then used to fix various media content for intended audiences based on location analytics (Smith, 2019, 2020). As the existing market segmentation databases work on the level of households, location intelligence becomes key for differentiating between users in any given household with finer detail and granularity (Qualcomm, 2018). This means that anyone living in the same household can be individually identified and profiled based on their movements both indoors and outdoors. To increase the degree of certainty of such behavioral predictions, users of mobile devices can be authenticated based on the location and motions of their devices (Alohar Mobile, 2014), and such authentication methods rely on pattern analysis and prediction, especially for use in security and control.
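The kind of unsupervised segmentation described above can be sketched, in a deliberately simplified form, as clustering devices by their mobility features. The features, values, and choice of a naive k-means below are my own illustrative assumptions; the patented systems cited here are proprietary and certainly more elaborate:

```python
def kmeans(points, k=2, iters=20):
    """Naive k-means clustering; centroids are seeded with the first k points."""
    centroids = [list(p) for p in points[:k]]
    labels = [0] * len(points)
    for _ in range(iters):
        # assign each point to its nearest centroid
        for i, p in enumerate(points):
            dists = [sum((a - b) ** 2 for a, b in zip(p, c)) for c in centroids]
            labels[i] = dists.index(min(dists))
        # move each centroid to the mean of its members
        for j in range(k):
            members = [p for i, p in enumerate(points) if labels[i] == j]
            if members:
                centroids[j] = [sum(dim) / len(members) for dim in zip(*members)]
    return labels

# hypothetical per-device features:
# (share of location fixes at home, share of fixes at retail POIs)
devices = [(0.90, 0.05), (0.85, 0.10), (0.20, 0.70), (0.25, 0.65)]
segments = kmeans(devices)
```

The resulting segment labels could then be cross-referenced with external databases, which is where an inferred mobility pattern hardens into a marketable psychographic profile.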
Geo-fencing
One of the most common uses of geo-profiles of both people and places is the geo-fence. A geo-fence is defined as ‘a technical territoriality through which new geographic boundaries are drawn’ (Barreneche and Wilken, 2015: 506). Companies that provide geo-fencing services describe the practice as a ‘virtual perimeter on a map representing real-world geographic area’ (Life360, 2016). Geo-fences work as invisible fences whose presence becomes physically apparent only when a mobile device enters or exits a space that corresponds to a virtually defined territory. However, this does not mean that virtual territories always match the existing physical territories.
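The entry/exit logic of a circular geo-fence can be sketched in a few lines. This is a hedged illustration only – the fence centre, radius, and track below are entirely my own invention:

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two coordinate pairs."""
    r = 6371000.0
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = (math.sin(dp / 2) ** 2
         + math.cos(math.radians(lat1)) * math.cos(math.radians(lat2))
         * math.sin(dl / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

def fence_events(fixes, centre, radius_m):
    """Emit 'enter'/'exit' events as a stream of location fixes crosses
    a circular virtual perimeter."""
    events, inside = [], False
    for lat, lon in fixes:
        now_inside = haversine_m(lat, lon, *centre) <= radius_m
        if now_inside and not inside:
            events.append('enter')
        elif inside and not now_inside:
            events.append('exit')
        inside = now_inside
    return events

# hypothetical track: approaching, entering, then leaving a 500 m fence
track = [(51.4800, -0.1200), (51.5005, -0.1200), (51.5100, -0.1200)]
events = fence_events(track, centre=(51.5000, -0.1200), radius_m=500)
```

The fence itself stays invisible until a crossing happens – exactly the property that makes it useful for alerts, alarms, and limits on movement.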
Geo-fences are most commonly used in marketing and advertising as well as in services that are marketed as ensuring the safety and security of children. However, the availability of this technology does not only drive commercial interests; it also has the potential to categorize citizens and to govern and control societies where data-driven governance is becoming the norm (Barreneche, 2012). For example, location intelligence is deployed for border security, where geo-fenced area monitoring can detect and differentiate between legal and illegal border crossings using risk metrics and clusters and prevent attempted border crossings (Ericsson, 2013; Trueposition, 2016). In such use scenarios, ‘the algorithms are used, first, to identify regular patterns of movement and then flag movements that deviate from this identified norm’ (Kotef, 2015: vii). Hence, movements that fall outside of the expected route within any predefined and geo-fenced area are flagged, as are entries into and exits from that area. This is evident from the patents filed by some of the leading location intelligence companies like Skyhook, which specifically target borders between countries. In these patents, geo-fences are used to create alerts or alarms when mobile devices cross borders or predefined boundaries, either when they enter or when they exit, as they ‘are used to put limits on the movement of tracked mobile devices’ (Life360, 2016). Such limitations can be determined based on calculated potential levels of threat to any given border. Political geographer Louise Amoore (2013) discusses such contested and disciplinary uses of location intelligence, including the use of Radio Frequency Identification (RFID) chips in passports, as part of ‘the politics of possibility’ that relies on ‘the emergence of knowledges of location’ (p. 108).
The data, including mobile identifiers and behaviors, are collected in a database, creating a historical trajectory, which can target an individual mobile device or a group of mobile devices as well as a specific (geo-fenced) area as defined by the wireless communication network service area. After collection, the data are then analyzed for ‘suspect behaviors and an index probability is assigned to each mobile’ (Trueposition, 2014). Those devices that are deemed ‘high risk’ are potentially flagged and are subject to additional scrutiny. Such additional scrutiny expands the disciplinary function of location intelligence. These mechanisms are ‘centrifugal’, as Foucault (2007) asserted: ‘Discipline concentrates, focuses, and encloses. The first action of discipline is in fact to circumscribe a space in which its power and mechanisms of its power will function […] In contrast, you can see that the apparatuses of security […] have the constant tendency to expand; they are centrifugal’ (pp. 44–45).
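A toy version of this risk-indexing logic can be sketched as the share of a device’s recent whereabouts that fall outside its historical routine. Everything below – the place labels, the score, the function name – is my own hypothetical simplification of the patented systems quoted above:

```python
def risk_index(history_cells, recent_cells):
    """Assign a toy 'index probability' to a device: the share of its
    recent location cells that fall outside its historical routine."""
    if not recent_cells:
        return 0.0
    known = set(history_cells)
    novel = sum(1 for cell in recent_cells if cell not in known)
    return novel / len(recent_cells)

# a device whose routine is home-work-shop suddenly appears near a border
routine = ['home', 'work', 'home', 'work', 'shop']
score_regular = risk_index(routine, ['home', 'work'])
score_deviant = risk_index(routine, ['home', 'border', 'border', 'work'])
```

A simple threshold on such a score is all it takes to turn a statistical deviation into a flag for ‘additional scrutiny’ – the centrifugal expansion Foucault describes.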
Predictive policing
Law enforcement – that is, predictive policing – is one of the critical uses of location intelligence. Location intelligence includes place-based predictive policing (e.g. Palantir and PredPol), where information about crimes and types of crimes is mapped based on the time of day and the location of the crimes, which may also include applications in immigration enforcement (Richardson et al., 2019: 200–205). Indeed, one can argue that the two pillars of predictive policing are location and time: ‘Predictive policing generally describes any system that analyses available data to predict either where a crime may occur in a given time window (place-based) or who will be involved in a crime as either victim or perpetrator (person-based)’ (Richardson et al., 2019: 21).
Where a crime has happened is important in determining crime hot spots and assessing the potential crime risk of any given area based on crime patterns (PredPol, 2017). Hence, based on past crime data, each patrol region is given a probability, which guides determining whether officers are in a ‘high probability crime region’ (PredPol, 2017). In this way, crime statistics and probabilities guide law enforcement officers’ perception of places based on calculated risk. Predictive policing is already a biased system due to differences in how crime data are collected and classified (Richardson et al., 2019: 25). Combined with place-based biases based on such calculations, racially biased uses and outcomes of these systems multiply and manifest themselves in practices of unlawful stops, searches and arrests, and excessive use of force, to name a few (Richardson et al., 2019: 25).
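Reduced to a caricature, the place-based logic above turns historical counts per patrol region into the probabilities that guide where officers look next. The sketch below is my own crude stand-in for proprietary models such as PredPol’s, but it makes the feedback risk visible: regions policed more heavily in the past mechanically become the ‘high probability crime regions’ of the future:

```python
from collections import Counter

def hotspot_probabilities(incidents):
    """Estimate, per patrol region, the probability that the next recorded
    incident occurs there, using nothing but historical counts."""
    counts = Counter(region for region, _hour in incidents)
    total = sum(counts.values())
    return {region: n / total for region, n in counts.items()}

# hypothetical past incidents as (region, hour-of-day) records
history = [('A', 22), ('A', 23), ('A', 1), ('B', 14)]
probs = hotspot_probabilities(history)
```

Because recorded incidents partly reflect where patrols already were, the loop from biased data to biased deployment closes by construction.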
In location intelligence, machine learning is generally used to understand and model human behavior. This includes predictive models that infer the intentions behind specific types of movement as well as the reasons for any form of movement (Google, 2019; Microsoft, 2017). For example, depending on the context of where the data are collected, two mobile devices moving exactly the same way may trigger further surveillance because the algorithms detect such patterns of movement as anomalies (e.g. one person carrying two mobile devices). One of the examples covered in the patents filed by companies like Palantir relates to criminal intent, where ‘known associates’ are searched among criminal records if a pattern of movement is flagged as suspicious (Palantir, 2016; Trueposition, 2014).
Nevertheless, human behavior does not exist in a vacuum. Such modeling requires not only the analysis of human behavior itself but also of the social and physical environment, which both defines and is defined by identifiable routines of movement in space-time. However, these identifiable routines of movement can be turned into places for deploying preemptive measures against any form of movement. For these kinds of predictions, mobile sensor data such as temperature, direction of travel, light, and sound are also collected and analyzed in addition to location data, where patterns of movement and physical context are used to infer who we are and what level of potential threat we may pose.
Such predictions are used in ‘identifying potential behaviors of interest, identifying specific mobile users associated with such behaviors of interest, associations between mobile device users and mobile device user identification when no public ID is available (such as with prepaid mobile devices)’ (Trueposition, 2014). These systems thereby fix meanings and behaviors. Hence, these predictions are disciplinary in nature, aiming to fix how one should behave in any given space and time.
Discussion: Predictable futures
Prediction in highly complex social systems is difficult. Instead of continuously predicting the future – for example, where one may travel to, where one may live, which route one may take, and so on – algorithmic governance aims to create predictable futures. In predictable futures, people, objects, and information could be easily identified and categorized. Each categorization corresponds to a set of predefined places, as in the examples of geo-profiles of both people and places. These profiles not only fix the meaning of places but also prevent mobilities, where certain behaviors may be flagged as potential threats and certain places may be flagged as places associated with a higher risk of attracting specific groups. This shift from predicting the future to creating predictable futures may be understood through algorithmic fix as a conceptual and theoretical framework, which builds on and blends the existing theories of mobilities, placemaking, locative media, and spatial fix.
As one of the key premises of this article, algorithmic fix is a practice of placemaking that aims to create predictable futures, which suppress social and physical mobilities. In a world where we are witnessing multiple crises of mobilities, sedentarist views of mobilities are unfortunately prevailing. From Trump’s infamous border wall and the rhetoric of ‘take back control’ in the UK’s Brexit campaign to the Covid-19 pandemic, which has led to national and global lockdowns and to the development of location tracking and mobile sensing apps, we are living in an era increasingly marked by a sedentarism that underlines the importance of roots and stasis (Sheller, 2018). Critically analyzing the sedentarist articulations of digital placemaking practices through the lens of algorithmic fix may also provide valuable insights into how location technologies and algorithmic prediction are employed by public and private companies and governments as a solution to overcome the uncertainty created by higher rates of mobilities.
The current political and economic conjuncture across the world puts an emphasis on stasis, borders, roots, and territoriality. This emphasis is also created and governed by algorithmic practices like geo-fencing and geo-profiling, which are increasingly used in predictive and preemptive practices like predictive policing. These practices have the potential to subjugate civil liberties – for example, when refugees seeking a safe new home have to rely on smartphones to map their own journeys, which are later shared with other refugees as a ‘safer route’ to safety (Gillespie et al., 2016, 2018). However, mobile connectivity and infrastructure can also manipulate physical mobility, whereby refugees follow and settle close to ‘islands of connectivity’, which are little pockets of mobility where they are allowed to move (Hounsell and Owuor, 2018: 30). As Taylor (2015) argues, this makes human mobility more legible. This reflects the paradox of mobility, where location technologies both enable and prevent mobilities.
Algorithmic fix can provide distinctive insights, especially in critically analyzing cases where data practices target vulnerable and marginalized groups. These groups are more open to such exploitation, and their mobilities are more strictly governed, which reinforces existing inequalities on a global scale. Such a relational line of thinking is especially grounded in the politics of mobilities, where not only the experience of movement itself but also (and most importantly) ‘racial and class processes, gendered practices, and the social shaping of disabilities and sexualities’ are under investigation (Sheller, 2018: 2).
From locating fleets using GPS and monitoring production supply chains using RFID chips to tracking parolees with electronic ankle bracelets and monitoring individuals with depression using GPS and mobile phones, location technologies serve multiple uses and purposes. However, the discourse is moving away from the societal implications of location data toward citizens’ safety, countries’ security, and overall well-being (e.g. predictive policing), since the power of knowing where things happen is invaluable, especially in overcoming the uncertainties caused by mobilities.
This move ignores the fact that the very same location determination technologies used, for example, to identify a potential criminal activity or a perpetrator can also be used to track and trace anyone with a smartphone. Nevertheless, this is not simply profiling for marketing and advertising purposes. Such processes lead to discriminatory practices, which Gandy (1993) famously conceptualized as the ‘panoptic sort’: ‘the all-seeing eye of the difference machine that guides global capitalist system’ (p. 1). As a discriminatory process, the panoptic sort operates at both the individual and population levels and ‘is designed to identify, classify, evaluate, and assign individuals on the basis of remote, invisible, automatic, and comprehensive sensing of personhood’ (Gandy, 1993: 3). Hence, algorithmic practices of placemaking like location intelligence potentially categorize and identify some citizens – marginalized groups – as ‘de facto prisoners’ (Internet Society, 2017) and reinforce discriminatory practices.
The hype around targeting consumers in real time with the most relevant advertising and better understanding who they are obscures the fact that practices of digital placemaking can also serve many other purposes, ranging from border security and defense to predictive policing. Most importantly, practices of digital placemaking are becoming more and more focused on a global mobile elite at the expense of the fixity and stasis of marginalized groups (Fast et al., 2018; Polson, 2016; Sheller, 2018). This fixity manifests itself in different aspects of placemaking and the construction of identities, where especially marginalized groups like refugees are put in place-based identity categories which then define not only their experiences of physical movement but also their practices of placemaking, such as establishing a new home as a safe and secure place.
Conclusion
Algorithmic fix operates at the intersection of relational (im)mobilities, machine learning, and location-awareness. Location-awareness plays a key role in the development of both commercial and government surveillance, which are closely aligned with the politics of (im)mobilities. This is not only materialized in border control and security. It is also evident in plans and initiatives for smart cities (or smart futures), where real-time citizen data complement the real-time tracking of goods and services, which may have implications for who may get access to certain places (as in gated communities) at the expense of which groups or populations. As Kitchin (2014) argues, ‘data have long been used to profile, segment and manage populations, but these processes have become much more sophisticated, fine-grained, widespread and routine’ (p. 176). With the proliferation of mobile and ubiquitous connectivity and sensor data, these predictive profiling efforts have become more commonplace and more reliant on joint private and public sector commercial interests and strategies. These profiles, along with information about our mobile habits, then feed into various mobility systems, creating a perpetual cycle of accumulating mobile sensor and user data, sorting and processing those data, and creating personal profiles that we are eventually driven to match. This fixes who we are and whom we may become, creating predictable futures in which individual and collective identities and placemaking practices are homogenized, and behaviors seen as risky or nonconforming are fixed by preventing any type of mobility. Hence, the future of societies governed by algorithmic systems is somehow made predictable.
Acknowledgments
The author would like to thank Erika Polson, Germaine R Halegoua, Karin Wahl-Jorgensen, Mirco Musolesi, Richard Ek, and Selena Nemorin for their views and comments on an earlier draft of this article. The author would also like to thank the anonymous reviewers of this article for their valuable insights and comments and the editors for their support and guidance.
