Abstract
Zuboff's The Age of Surveillance Capitalism provides a powerful analysis of the emergence of surveillance capitalism as a particular type of informational capitalism. Many of the important impacts of this project of creating ever larger and more integrated systems of ‘behavioural surplus’ are captured powerfully by Zuboff; yet, as risk and organisational scholars such as Beck, Perrow, and Vaughan have argued, integrated systems often do not function as intended. While the imperfection of these systems may suggest that surveillance capitalism is not as bad as Zuboff claims, their failure to function as intended can also make surveillance capitalism an even more dystopian possibility. In this vein, this paper asks: what are the consequences when the tools of a surveillance capitalist society break down? This paper argues that it is by thinking through Zuboff's framework that we can identify the systemic fragility of a surveillance capitalist society. This fragility emerges from how surveillance capitalism generates imperatives towards the maximal collection of data for exploitation, which in turn generate a corresponding imperative to connect all aspects of life. Both of these imperatives, to collect and to connect, create an immensely fragile digital system with vast ramifications throughout social life, such that small imperfections and gaps in the system can magnify risk throughout society.
Keywords
Introduction
Over the last 20 years, a new, and increasingly dominant, business model of the internet has emerged. As powerfully charted by Zuboff (2019), companies operating this new business model provide services in return for the right to impose detailed surveillance on their users. In a novel twist on existing theories, Zuboff (2019) highlights that those on social media are neither the customers nor the product (see Smythe (1977) and Fuchs (2017)); rather, their actions are the raw materials that are collected, analysed, and then converted into services that are sold to other companies. Describing this new business model of informational capitalism as ‘surveillance capitalism’, Zuboff (2019) argues that the digitalisation of social spaces, and the instrumentalisation of these spaces so as to ever more effectively create prediction services, are a fundamental threat to society and to individual flourishing.
Zuboff's (2019) diagnosis of many of the discontents of the current digital economy is impressive. It makes an important contribution to an already existing literature, identifying how the current business model of the internet exploits and disfigures existing social relations (Crary, 2013; Pasquale, 2015; Couldry and Mejias, 2019). What Zuboff (2019) and these other analyses share is a picture of the world in which big data social media companies increasingly impose an order of rationalisation on our social and private lives. Undoubtedly, this is an important insight. To restate these claims in Habermasian (1984, 1987) terms, the current business model of the internet has led to an unprecedented invasion and instrumental rationalisation of the lifeworld based on (capitalistic) system imperatives. 1
Yet, while systems definitely do impose their ordering on social and material life (Habermas, 1984, 1987), and while the imposition of this highly instrumental and intrusive order poses an important threat to existing social and material life (Zuboff, 2019), these new systems are also subject to crises and breakdowns that can threaten social and material life (see Polanyi (1957), Habermas (1975), and Beck (1992)). As Beck (1992: 22, original emphasis) highlights, ‘Along with the growing capacity of technical options [Zweckrationalität] grows the incalculability of their consequences’. As this paper argues, by shifting ever more of social life into being mediated by surveillance capitalist digital systems, surveillance capitalism extends the potential not only for an increasingly instrumentally rationalised and ordered society but also for a more intensely disordered and chaotic society when these increasingly powerful systems of control break down. In this vein, this paper examines how the extension of surveillance capitalism creates both a highly vulnerable social and material infrastructure and an intensifying social, political, and economic dependence on this infrastructure. Both of these processes massively increase the systemic social risk emerging from these digital networks.
How the increasing imposition of order can in turn create greater risks of systemic disorder is well instantiated in how surveillance capitalism's attempts to impose control create greater risks of breakdown, due to the specific underlying preconditions of a surveillance capitalist economy. The core of surveillance capitalism is the competition among digital platforms to maximise ‘behavioural surplus’. The result is an imperative to collect data from human behaviour (Zuboff, 2019). This imperative to collect in turn entails an imperative to connect, which in turn creates ever-greater connections and dependencies at the human–digital network interface. 2 As discussed below, when these digital networks of connection and collection function as intended by surveillance capitalists, they impose a massive level of instrumental rationalisation over human behaviour. Yet, the unintended yet foreseeable effect of this level of control (see Giddens (1976)) is that when these increasingly powerful networks break down, they can create cascading, systemic risk. As such, alongside the growth in the scope and scale of surveillance capitalist imperatives of collecting behavioural surplus, there is growth in the risk of a corresponding disruption to the social and material functions depending on these networks. Consequently, this paper provides an additional basis to critique surveillance capitalism, beyond Zuboff's (2019) critique of the instrumentalist ordering of life – namely, that surveillance capitalism can create greater disorder and disruption to human functioning in ways other than those Zuboff identified.
One advantage of this critique is that it builds on, in a creative way, one of the criticisms of Zuboff's (2019) book, which is that the gap between surveillance capitalists’ intentions and realisations may be greater than Zuboff allows (see Kapczynski, 2020). In this way, this paper highlights the gap between the attempts of surveillance capitalists to control social and economic life and their ability to do so. Analysing the implications of this gap can further support the claim that the imperative to collect and connect tends to generate systematically important but hugely fragile digitalised infrastructures of everyday life that can threaten the functioning of social life.
This paper proceeds in three steps. First, it briefly outlines Zuboff's (2019) account of surveillance capitalism and the risks it does and does not focus on. Second, the paper outlines how the search for behavioural surplus generates both the imperative to collect and the imperative to connect. Last, the paper highlights how the imperative to collect and connect generates systemic digital risk.
Surveillance capitalism: The risks it focuses on and the risks it does not
Zuboff (2015, 2019) has identified a new stage of ‘informational capitalism’, which she terms ‘surveillance capitalism’. In surveillance capitalism, users of internet services from oligopolistic data companies become the ‘raw material’ for the development of services that predict the future behaviour of individuals. Key to the shift in the dominant internet business model is a transition towards the generation of profit through the maximisation of ‘behavioural surplus’ (Zuboff, 2019: 129–131). The rise of ‘behavioural surplus’ signifies a shift from collecting data on users to improve services to collecting data on users so as to provide services that classify and predict the behaviour of those users. Zuboff (2019: 74–77) specifically identifies Google's pivot in 2000 towards funding its search service by selling services to advertisers based on the information Google collected on users as a key, early exemplar of surveillance capitalism. For Zuboff (2019: 74, 162–163), Google, Facebook, and increasingly Microsoft have become three paradigmatic cases of surveillance capitalist companies. Rather than following the previous model of selling hardware or software for use, they have all come to exemplify a model in which they provide users with services in return for appropriating their data.
Employing the metaphors of ‘dispossession’ and ‘rendition’, Zuboff (2019: 138–139, 233–234) develops a powerful critique of this process of data extraction from users. Not only do these digital environments violate our privacy – they are also increasingly designed to maximise the extraction of data that can be used to predict behaviour and to force us into social and economic situations in which our behaviour is more easily and reliably predicted. The growing inescapability of surveillance capitalist services for functioning in social and material life leads to the dominance of ‘instrumentarian power’ (Zuboff, 2019: 434). For Zuboff (2019), instrumentarian power is the increasing dominance of the instrumental rationalisation of social life for the purposes of data extraction. Ultimately, for Zuboff (2019), these processes generate threats to social life, to private life, and to public and democratic life. 3 There have, however, been several important critiques that question the ability of the surveillance capitalism framework to illuminate contemporary capitalism. Before proceeding to build on this framework and further examine the side effects of the pursuit of surveillance capitalism, these critiques need to be addressed.
Addressing critiques of Zuboff's (2019) theorisation of surveillance capitalism
Zuboff's (2019) The Age of Surveillance Capitalism has gained widespread attention, with a massive number of reviews across the humanities, social sciences, business studies, and legal scholarship (Jansen and Pooley, 2021). While there have been a multitude of different responses to the work, as well as much appreciation, there are a few key critiques that have been made of Zuboff's (2019) work. To properly pursue the intention of this paper – to creatively build on the surveillance capitalism framework to identify key risks not explicitly addressed in the existing literature – it is necessary to evaluate these critiques and the extent to which they affect the adequacy of the framework.
The first of these critiques is that Zuboff (2019) neglects the problems with other forms of digital capitalism, such as the monopoly pricing power and labour exploitation power of Apple (Morozov, 2019; Breckenridge, 2020: 934; Kapczynski, 2020: 1474–1475). A second set of critiques revolves around the claim that Zuboff's (2019) critique of surveillance capitalism involves overstatement in terms of its impact, such as ‘loss of the right to a future tense’, and that data is ‘dispossessed’ (Morozov, 2019; Cuellar and Huq, 2020). A third set of critiques is that Zuboff (2019) potentially overstates the ability of surveillance capitalist corporations to predict and control individuals (Morozov, 2019; Breckenridge, 2020: 930; Kapadia, 2020: 342; Kapczynski, 2020: 1473–1474; Jansen and Pooley, 2021: 2845). Lastly, there have been suggestions that Zuboff (2019) has overstated the importance of surveillance capitalism to contemporary capitalism as a whole (Kapczynski, 2020: 1472–1473).
In terms of addressing the first critique, it appears legitimate to concede that there are many fundamental problems with digital capitalism outside of the appropriation of behavioural surplus. In particular, the practices of labour exploitation of Apple, the way Apple uses its monopoly position to accumulate massive profits (Fuchs, 2017), and how it instigates social practices that intensify social exclusion for those who are unable to acquire their products (McGee, 2023) raise doubts about the valorisation of Apple as a suitable emancipatory alternative. Nevertheless, despite raising important questions about Zuboff's (2019) analysis of alternative dimensions of digital capitalism, this in itself does not undermine the analytical value of her critique of surveillance capitalism.
In responding to the second critique, questions can be raised about whether all of the key terms of the critique have been sufficiently evidenced. Nevertheless, even if we remain unsure whether one's data is ‘dispossessed’ – a claim that would require a more extensive argument for the rights to control our data than the book includes – the book still powerfully outlines a new business model. In this way, Zuboff (2019) brings a series of different processes together into a powerful framework (Cohen, 2019: 240; Cuellar and Huq, 2020: 1284). As such, the key insights revolving around the emergence of a new business model and its new mode of extraction (Cohen, 2019: 240) capture important social and material processes. In particular, irrespective of whether the process fully involves ‘dispossession’, it is clear that the core of the development of surveillance capitalism is the massive extension of the data ‘extraction architecture’ (Zuboff, 2019: 129–132). Likewise, as discussed further below through the dual imperatives of collect and connect, surveillance capitalism does not need to be the only mode of capitalist growth for the exponential expansion of the extraction architecture to have massive risk implications for society.
In terms of the third critique, there are important questions regarding whether we can equate surveillance companies’ existing plans to predict and control with their actual power to do so (Kapczynski, 2020: 1473–1474). Nevertheless, while this question matters for assessing the full extent of instrumentarian power's ability to control behaviour, it is clear that Zuboff (2019) has not overstated the ever-growing reach of the extraction architecture and the systemic imperatives behind this drive. That is, even if surveillance capitalist companies may not have the power to predict and control society as thoroughly as Zuboff (2019) fears (though, as Kapczynski (2020: 1473–1474) acknowledges, we should not simply dismiss this risk), their ability to increasingly mediate every aspect of social practice still raises fundamental risk questions that need further investigation.
The last critique raises important questions about the scope of surveillance capitalism. That is, the challenge may be raised that if it is just a few social media companies engaging in this business model, then it is hard to view it as the dominant feature of capitalism. Firstly, it should be conceded that traditional industries producing natural resources still occupy a key role in contemporary capitalism. Nevertheless, it can be argued that digital capitalism is becoming a hegemonic model (Srnicek, 2017: 5) and that this is clearly manifested in the increasing ‘smartness mandate’ across all of society (Halpern and Mitchell, 2022). Moreover, within the broader rubric of digital capitalism, the quest to extract and control immense amounts of data is the primary business model (Srnicek, 2017: 6). Furthermore, as Zuboff (2019) has highlighted, even companies outside the traditional digital sector – in insurance, healthcare, finance, transportation, and retail – are shifting towards data extraction as a key aspect of their business activity. While some companies, such as Google and Facebook, are almost solely surveillance capitalist companies, and others, such as Microsoft, are shifting increasingly towards surveillance capitalism while retaining elements of other business models, for many others it is nevertheless an increasingly important aspect of their business. Insofar as data extraction can provide an additional stream of revenue as well as boost stock market values, it can work as an imperative for corporations in capitalism in the way that any other strategy for maximising profits does.
Moreover, it can be argued that this imperative is even more exigent in the case of data collection, as the combination of high fixed costs, low marginal costs, and first mover benefits due to network effects creates massive pressure to either dominate as a monopolist or be competed out of the sector (Srnicek, 2017; Kapczynski, 2020: 1477). It should also be noted that surveillance capitalism does not need to overtake all other business models in society to create fundamental risks. Insofar as the digital extraction architecture increasingly serves as the necessary condition of other goods of life, it can have social consequences far greater than even its sizeable economic footprint would suggest.
As such, despite some of the important points of potential critique that have been identified in Zuboff's (2019) analysis, The Age of Surveillance Capitalism is an impressive treatise, which highlights the risks of a new and highly unequal instrumental rationalisation and exploitation of social life imposed by existing data companies (see also Crary (2013), Pasquale (2015), and Couldry and Mejias (2019)). As mentioned above, what Zuboff (2019) and these other analyses share is an account of the world in which big data social media companies increasingly impose an order of rationalisation on our social and private lives. Undoubtedly, this is a key insight. The risks that these systems pose when they work as they are designed – to understand, predict, and control the behaviour of their users and the private, social, and public worlds they occupy – are massive. Nevertheless, there are also massive risks when the infrastructures of social, political, and economic life developed to fold life within surveillance capitalist imperatives of behavioural surplus extraction do not function as intended. Insofar as the search for behavioural surplus leads to the ever more intimate connection and dependence of life on these digital networks used to collect and analyse data, then surveillance capitalism also poses a powerful threat to our autonomy and ability to function when these systems malfunction. 4
Behavioural surplus and the imperative to collect and connect
The core of the business model of surveillance capitalism is the appropriation of behavioural surplus. This involves learning ever more about individuals’ actions so as to be able to predict and potentially control their future behaviour. Yet it involves much more than simply collecting data from existing activities. To collect these data, digitally networked companies increasingly format interactions and social practices to maximise data collection. First, this involves formatting environments of interaction so as to create situations in which action is more predictable, while also generating as many data points as possible from behaviour in these environments so as to better predict future action (Zuboff, 2019). Second, this focus on extracting the maximum value from behavioural surplus involves formatting action environments so that there are more significant points of profitable intervention. This is done so that the increasing power to predict behaviour in these highly formatted environments can be transformed into revenue (Zuboff, 2019).
Achieving the maximisation of behavioural surplus based on the principles of understand, predict, and control generates two key infrastructural imperatives with direct relevance to the systemic fragility generated by the pursuit of surveillance capitalism. The first is the imperative to maximise the collection of data by social media companies as the basis for developing the ability to understand, predict, and control behaviour (Zuboff, 2019). The second, which emerges both to achieve this goal of collection and to maximise the use of the data collected, is the imperative to connect, that is, to connect as many users and as many of their social practices as possible to digital networks.
This imperative to connect – flowing from the imperative to collect – works in several key ways. First and foremost, it involves an imperative to connect as many users and as many of their activities as possible to digital networks. As has been highlighted in terms of big data, quantity is fundamental, but it is not the only key – the diversity of data points and the velocity of the data are also particularly important (Mayer-Schönberger and Cukier, 2013: 199). It should be noted that Kitchin (2014) has also highlighted other key dimensions of big data (including resolution, relationality, and exhaustivity), though velocity, volume, and variety remain fundamental characteristics alongside these. These other dimensions also strengthen the imperative to collect and connect, though for the sake of exposition, they are not each individually analysed here.
This process of making more and more of social and material life susceptible to the collection of data, so as to maximise the effectiveness of big data and the predictive algorithms built on them, has thus been described as ‘datafication’ (Mayer-Schönberger and Cukier, 2013; van Dijck, 2014). As such, datafication entails not only the intensive collection of data but also the intensified interconnectedness between all of these different points of data collection. The result of this imperative to collect is thus ‘an ecosystem of connectivity where all online platforms are inevitably interconnected, both on the level of infrastructure as on the level of operational logic’ (van Dijck, 2014: 204).
As should be emphasised, though, this imperative to collect is not simply an idiosyncratic preference of some corporations; rather, it is built into the logic of surveillance capitalism. The imperative to collect enables firms both to develop and implement maximally effective prediction products and to improve their services so as to further develop their bases of monopoly. 5 As the example of Google versus its competitors shows, surveillance capitalist firms are able to build on small initial advantages to generate the ‘virtuous cycle’ of more data collected, thus better services, thus more users, thus more data collected, which supports further monopolisation of markets (Srnicek, 2017: 96).
This imperative to collect is thus an imperative for firms to survive and thrive in surveillance capitalism. It has in turn generated an imperative to connect so as to gain as large and as fine-grained data sets as possible across a variety of domains of social life. This imperative to connect, imposed on people as individuals, consumers, participants in political deliberation, and workers, can be seen across society. 6 As Zuboff (2019) highlights, it is not only in social media use by private individuals that the imperative to connect functions. In fact, it is in our activities as workers that many of the most extreme cases of invasive data collection and connection occur, both to enable employers to control their workers and to further develop these technologies. 7 As Zuboff (2019) highlights well, the social imperative for connection via social media use has become a key prerequisite for achieving ‘effective life’ (Zuboff, 2019: 53; see also Vaidhyanathan (2018) and Wu (2018)). As the costs of attempting to avoid social media continue to grow, so too do the intensity and extension of the connection of our lives with social media networks.
The internet of things (IoT), both for consumer goods and in the form of the industrial internet of things (IIoT) and its associated use of cloud robotics (see Greengard (2015); Schneier (2015, 2018); and Couldry and Mejias (2019)), likewise presages a massive intensification of data collection and connection. In the case of consumer IoT, while some product rollout is driven by consumer convenience, much of it appears to be driven primarily by companies seeking to collect ever more detailed information on consumer practices. For at least some products, such as TVs and automobiles, it is now almost impossible to find new products that are not also digitally connected data collection devices. 8 Despite its already impressive reach, the extension of IoT for private consumer goods still has much further to go. It is also increasingly embedded in environments where it is not even nominally voluntary, such as work environments, but also urban environments, in ‘smart cities’ (see Townsend (2013) and Albino, Berardi, and Dangelico (2015)). In these environments, the aim is increasingly to pursue real-time governance based on predictive algorithms trained on big data, which are then used to optimise urban governance systems. Again, most of the systems that are developed and used are developed and owned by private corporations (for a recent example, see Curran and Smart (2021)).
Ever more intensive data collection and various attempts to impose financially beneficial control over the choice situations of individuals in surveillance capitalism lead to a massively interconnected network that ineluctably collects data and then feeds them into proprietary algorithms. As such, the imperative to collect data and the imperative to connect result in immense interdependencies with digital networks. The following section discusses the logic of the imperative to connect and how this relates to the potential systemic risks that can emerge from these digital networks.
Imperative to collect and connect and the growing vulnerability of digitally interconnected networks
There are three primary principles of cybersecurity: confidentiality, availability, and integrity (Schneier, 2018). Violations of confidentiality include cases in which hackers are able to gain illegitimate access to confidential data (see the Sony, Target, Equifax, and Capital One hacks), as well as when systems are not secured and encrypted so that confidential data are unintentionally left on the internet open to be accessed by others (see Facebook (ENISA, 2019: 66)). Availability is the characteristic of a system in which its functionalities are available to authorised parties when needed (Pfleeger, Pfleeger, and Margulies, 2015). Integrity involves only authorised parties making changes to a digital system. Integrity violations can involve modifications of an existing data set, as well as modifications to the functioning of software (Schneier, 2018). 9
As argued above, surveillance capitalism and its imperative to collect likewise generate an imperative to connect. These twin imperatives tend to massively increase the systemic risk emerging from digitally interconnected networks. This intensification of risk occurs in two key ways. First, it increases the vulnerability of the digital networks. Second, insofar as these digital networks grow in importance and replace other modes of social provisioning, 10 then society is more dependent on and hence vulnerable to any digital network failures. Processes associated with surveillance capitalism, including the increasing dominance of social media in social life, the growing reach of IoT and ‘smart cities’, and the more general networked digitalisation of life, consequently mean that breakdowns in digital systems pose a key threat to ‘effective life’.
The argument that the imperative to collect and connect intensifies systemic risk is pursued below by identifying key factors that these dual imperatives amplify and that in turn intensify systemic risk. Before addressing these factors, however, a brief discussion of systemic risk and how it can inform the analysis of cybersecurity is needed.
Systemic risk
Systemic risk is risk that emerges not just from the risks of specific units or components (such as a single computer) in a system but rather from the interactions between these different units, which are opaque if each unit is viewed in isolation. 11 The core of systemic risk is the concept of contagion and how a malfunction in one part can cascade and cause ripples of risk throughout the entire network (de Bandt and Hartmann, 2000; Roukny et al., 2013). Building on studies of risk emerging from finance, the literature on systemic risk seeks to capture the risks of highly interconnected systems (Haldane, 2009; Vespignani, 2010; Haldane and May, 2011; Goldin and Mariathasan, 2014; Centeno et al., 2015). In particular, large, interconnected networks tend to create systemic risks in two key ways. First, they can intensify the risk that a failure of one part of a network will cause large parts of the rest of the network to fail. Second, the growth in the size and interconnection of the network with other parts of social life increases the potential societal dependence on the network – so that if the network fails in some way, it can potentially threaten key social functions.
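The contagion logic described above can be made concrete with a stylised computational sketch (a hypothetical illustration constructed for this discussion, not drawn from the systemic risk literature cited here): in a sparsely connected network, a failure stays largely local, whereas when many services depend on a shared hub – as under the imperative to collect and connect – a single failure propagates through the whole system.

```python
# Stylised sketch of failure contagion in a dependency network.
# Nodes are services; an edge from A to B means B depends on A,
# so if A fails, B fails too. All names here are illustrative.

def cascade(dependents, start):
    """Return the set of nodes that fail when `start` fails."""
    failed, frontier = {start}, [start]
    while frontier:
        node = frontier.pop()
        for nxt in dependents.get(node, []):
            if nxt not in failed:
                failed.add(nxt)
                frontier.append(nxt)
    return failed

# Sparse network: each service depends only on its predecessor in a chain.
chain = {0: [1], 1: [2], 2: [3]}

# Densely connected network: every service depends on a shared data
# platform (node 0) and on each other, mirroring a collect-and-connect
# architecture in which one hub mediates all other services.
dense = {0: [1, 2, 3], 1: [2], 2: [3], 3: [1]}

print(len(cascade(chain, 2)))  # failure mid-chain: 2 nodes fail
print(len(cascade(dense, 0)))  # failure of the shared hub: all 4 nodes fail
```

The sketch makes visible the two mechanisms noted above: denser interconnection widens the reach of any single malfunction, and the more functions that route through the hub, the greater the societal dependence on its continued operation.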
The following section builds on literatures on factors identified as intensifying systemic risk (Perrow, 2007; Haldane, 2009; Vespignani, 2010; Haldane and May, 2011; Goldin and Mariathasan, 2014) and on the key principles of cybersecurity and insecurity (Schneier, 2003, 2015, 2018; Nunan and Domenico, 2017; Clarke and Knake, 2019; Schiller et al., 2022). It shows how the dual imperatives to collect and connect increase the risk of digital networks experiencing systemic failure due to complexity, target richness, and the weakest link feature of networks, as well as how the increasing size and importance of the network intensify the social dependence, and hence vulnerability, to these cascading malfunctions of these networks (eggs in one basket). 12
Complexity
As has been emphasised in both the systemic risk (Haldane, 2009) and the cybersecurity literature (Schneier, 2018), increased complexity tends to amplify potential cascading risk. The surveillance capitalist imperatives of collection and connection increase the number of nodes in the networks that collect data while also massively increasing the size, variety, and velocity of data collection practices. All of this tends to increase the complexity of the digital network. This amplification of size and complexity increases the confidentiality, availability, and integrity risks associated with these digital networks in a series of key ways. Firstly, the amplification of the number and variety of ways that data are collected, and of the size and diversity of the data sets, increases the complexity of the digital networks collecting and storing those data. As has been widely emphasised, such amplification of complexity significantly increases the potential for security failures (see Nunan and Domenico (2017); Schneier (2015, 2018); and Curran (2018, 2020)). 13 Complexity increases the likelihood of coding mistakes that can be exploited by those seeking unauthorised access (Perrin, 2010). Complexity also intensifies the potential for gaps in protection due to unanticipated interactions between different components of a system, which tend to multiply with complexity (Schneier, 2015; see also Perrow (1984) and Zurich and Atlantic Council (2014)).
The massive spread of networks associated with the ethos of collect and connect exponentially increases the ‘attack surface’ of digital networks, thus making it much more difficult to effectively defend the network. Highlighting the importance of this development, a recent leading cybersecurity report identified as its number one trend in cybersecurity that the ‘attack surface in cybersecurity continues to expand as we are entering a new phase of the digital transformation’ (ENISA, 2020: 10). This increase in the attack surface in turn intensifies vulnerability to security failures of our increasingly digitally networked infrastructure of everyday life (ENISA, 2020; see also Schneier (2018)). The size and vulnerability of the attack surface are in particular magnified as surveillance capitalist imperatives become intertwined with IoT, smart cities, and the general networked digitalisation of life (see Greengard (2015), Schneier (2018), and Curran (2020)).
Complexity also increases the number of points of interface between humans and computers (see Greenfield (2017)), which makes computing systems much more vulnerable to ‘social engineering’, which is the accessing of systems through the mistakes of individuals rather than through coding mistakes, such as zero-day vulnerabilities. 14 The ‘collect as much as possible’ ethos of surveillance capitalist firms creates situations of such complexity that, even when no one is actively seeking to access these data sets, they may be unintentionally made available for access on the web, as has happened with both Facebook and Twitter (ENISA, 2019: 66). In the case of Facebook, a weakness in the ‘Search’ capability led to the platform exposing approximately 2 million persons' data, while for Twitter, a glitch in the password handling procedure left all users' passwords (about 330 million) potentially accessible in plain text in an internal log (Al-Heeti and Ng, 2018; ENISA, 2019: 66). In this way, confidentiality, availability, and integrity risks are all intensified by the complexity amplification associated with the dual imperatives of surveillance capitalism.
Target richness
The increasing size, variety, and intimacy of the data collected by surveillance capitalist firms generate an increasingly target-rich environment for attackers (see Perrow (2007) and Schneier (2018)). As Perrow (2007) emphasised, even before the rise of the focus on big data, the excessive concentration of valuable resources, including data, in one place generates risks because it both attracts more attacks and increases the damage when attacks succeed. Akin to the notion that people rob banks because that is where the money is, the growing size, variety, intimacy, and value of the data amassed under surveillance capitalism's imperatives of collection and connection generate massive targets for attackers (see Financial Times (2022)). The size and importance of recent hacks support this point: the confidential data of over 140 million people were breached in the case of Equifax, 100 million for Capital One, and 40 million for Target. Such large concentrated pools of confidential data are the perfect target for those seeking to gain money or to achieve other strategic ends through accessing this information (see McMillan (2019)).
Larger networks provide ‘economies of scale’ in returns for attackers. This is particularly true for data breaches. Despite the large hacks of Capital One, Equifax, and Target in the 2010s, large-scale breaches continue to occur. In August 2021, a sophisticated cyberattack led to the theft of 7.8 million records of current T-Mobile customers, as well as 40 million records of former or prospective customers (ENISA, 2022: 105). In this case, simply being a prospective customer was sufficient to make one vulnerable to potential data theft. Economies of scale likewise apply to illegitimately gaining control of internet-connected devices for other illegitimate purposes. From 2021 to 2022, malware attacks targeting IoT nearly doubled (ENISA, 2022: 50). Where the integrity of these IoT devices is breached, hackers can assemble them into a botnet for attacking other systems. Such IoT botnets can be used to threaten the availability of key servers via distributed denial of service (DDoS) attacks; to spam others; to ‘brick’ IoT devices so that they no longer work properly; to steal sensitive information; to mine for cryptocurrencies; or even, in some contexts, to cause physical harm (Schiller et al., 2022). In all of these cases, the greater the size of the data collected, and the greater the extent to which these data collection systems are imbricated in the basic functions of our lives, the greater the gains to be had from illegitimate access. In this way, the dual imperatives of collect and connect contribute towards ever-greater interconnection and hence to larger and more valuable targets for hacks.
Weakest link
The combination of the amplification of complexity, dynamism, and size emerging from the dual imperatives of collect and connect also increases the threat of what might be called the ‘weakest link’ principle of networks (see Haldane (2009), Schneier (2018), and Schiller et al. (2022)). As has been previously emphasised, from a security perspective, a system is not as strong as its strongest link but as strong as its weakest (Schneier, 2003: 103–106). Attackers do not have to achieve unauthorised access through all of a system's points but only through its weakest (Schneier, 2018). Undoubtedly, defence in depth is an important principle of cybersecurity (Schneier, 2003); nevertheless, additional layers of security still leave what might be called a ‘weakest path’ of defence, which remains the most vulnerable to attack, even if more than one failure is necessary. The greater the size and complexity of digital networks, the greater the number of links and paths to unauthorised access, and hence the greater the number of weak links and paths. The increasing size and complexity of digital networks also increase the number of parts of a system that must function for the whole system to function properly, vastly multiplying the necessary conditions for the whole network to function successfully. Size and complexity interact in many ways to intensify vulnerability: increasing size makes it more likely that some parts of the network will not function as intended, while size and complexity together make it increasingly difficult for network protectors to diagnose where these vulnerabilities actually lie. In fact, as Haldane (2009: 16) has emphasised in a different context, the complexity and uncertainty associated with heightened interconnectedness can lead to a situation in which ‘spotting the weakest link became impossible’, thus making existing networks even more fragile.
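The compounding fragility to which this principle points can be sketched with a simple, purely illustrative calculation (not drawn from the cited literature), under the strong simplifying assumption that components fail independently:

```python
# Illustrative sketch only: assume a network of n components, each of
# which functions independently with probability p. The whole network
# functions only if every component does, so its reliability is p**n.

def network_reliability(p: float, n: int) -> float:
    """Probability that all n independent components function."""
    return p ** n

# Even with highly reliable individual components, the reliability of
# the whole decays rapidly as the number of necessary conditions grows.
for n in (100, 1_000, 10_000):
    print(n, network_reliability(0.999, n))
```

On this toy model, components that each work 99.9% of the time yield a system that functions roughly 90% of the time at 100 components, but less than 0.005% of the time at 10,000 components – a crude but suggestive analogue of how increasing size multiplies weak links.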
An illustrative example can help clarify the role of this principle. Equifax, a paradigmatic surveillance capitalist company, is a large firm that trades in the personal data of individuals. In September 2017, Equifax acknowledged that months-long access to its credit report databases by hackers had led to the breach of the personally identifiable information of over 143 million people (Fleishman, 2018); the final count was 148 million. Despite the vast amount of data accessed, it was the failure of a single Equifax employee, who had left one internet-facing web server running out-of-date software, that enabled the successful hack (Fleishman, 2018). It took Equifax 76 days to realise it had even been hacked (U.S. House of Representatives Committee on Oversight and Government Reform, 2018). The prodigious complexity and size of its databases, combined with the ‘weakest link’ principle, enabled such a large hack of personal information. While the single employee undoubtedly made a mistake, the primary failure lies in designing systems so important and so fragile that single mistakes can lead to catastrophic consequences (Perrow, 1984, 2007; see also Vaughan (1999)).
Eggs in one basket
The size, variety, and intimacy of the data collected not only make cyber breaches and breakdowns more likely; they also significantly increase the damage they cause. In what might be called the ‘eggs in one basket’ principle, the creation of ever larger data sets, and of the networks of collection and connection upon which they rely, not only increases the likelihood of errors – it also increases the likelihood of these errors generating catastrophic damage across society (see also Perrow (1984, 2007) and Schneier (2018)). Insofar as digital systems replace other means of social provisioning and reproduction, their malfunctions and failures cause much greater damage due to the increased social dependence on these systems. And insofar as these digital systems increasingly become the necessary condition for more and more of effective social life, failures in these systems are much more likely to create a ‘domino effect’ 15 of damage across society. 16
The scale of the impacts of the systemic threat of confidentiality breaches is still somewhat difficult to comprehend. As Zuboff (2019) highlights, building importantly on Beck's (1992) discussion of risk, when we create new systems, they sometimes create new risks that cannot easily be integrated into our existing frameworks of understanding. As such, we cannot adequately apprehend the harm to social life that they pose. In this way, the drip, drip, drip of aggregated confidentiality breaches – Yahoo, Target, Sony, Facebook and Cambridge Analytica, healthcare records – alongside individual, seemingly idiosyncratic cases of identity theft and phone hacking, does not easily generate a determinate picture of harms, even if it is clear that damage is being done.
In the first half of 2018, data breaches compromised 4.5 billion records (ENISA, 2019: 65). Still, the cumulative impacts of such risks may be much more significant than discussion of any single number of breaches can capture. Ultimately, massive damage is done to people's lives. The reflexive, continued perception of the need to ubiquitously use digital systems that contain our most intimate thoughts and details – but which we fundamentally cannot trust – threatens to create a new ‘iron cage’ of harm, social mistrust, and fundamental damage to privacy, which is core to human development and interaction (Zuboff, 2019; see also Arendt (1958) and Bauman (2000)). The growing emphasis on ‘zero trust’ in cybersecurity (ENISA, 2021: 65, 86) again reflects the social changes driven by the increasing ‘eggs in one basket’ infrastructural imperatives of surveillance capitalism. Digital systems that are so fragile while also being so foundational thus increasingly make ‘zero trust’ a necessity, as even small, inevitable errors can have catastrophic consequences.
Likewise, with violations of availability and integrity, surveillance capitalism's tendency towards the constant replacement of social functions provisioned by non-digitally networked means with digitally networked ones not only increases the likelihood of failures; it also exponentially increases the costs of these failures. Illustrative examples may aid in concretising some of these risks. The WannaCry cyberattack of 2017 affected over 100 countries worldwide. It was based on the identification and exploitation of a single key vulnerability in Microsoft software (Larson, 2017). It had a series of significant impacts across the world: one-third of the UK's National Health Service (NHS) was rendered inoperative, over 1000 computers at Russia's interior ministry were disrupted, and businesses such as FedEx and Telefónica were also affected. In total, it is estimated that over 230,000 computers were infected by WannaCry (Thomas, 2019), and the costs of the attack are estimated at somewhere between $4 billion and $8 billion (Greenberg, 2018).
Later that same year, the NotPetya ransomware was unleashed. It is considered the most costly attack yet, with estimates that it caused over $20 billion in damage while also shutting down key infrastructures (Clarke and Knake, 2019: 18). Exemplifying well the above principles of complexity, target richness, and weakest link, it was the exploitation of vulnerabilities in the update servers of a Ukrainian software company, Linkos, that provided a back door to thousands of computers in Ukraine and, through Ukraine, to the rest of the world (Greenberg, 2018). The exploitation of this single vulnerability ended up causing approximate damages of $870 million to Merck, $400 million to FedEx, $384 million to the French construction company Saint-Gobain, $300 million to Maersk, $188 million to Mondelēz, and $129 million to Reckitt Benckiser (Greenberg, 2018). The case of the global shipping company Maersk is particularly instructive. The damage to its operations was extremely extensive, with the malware destroying all of the data containing the inventory of its ships around the world. While this caused several days of delay in its global shipping processes, it could have been much worse. Luckily, earlier in the day, its Ghanaian office had gone offline due to a blackout and so was not on the internet when the attack struck, leaving a single copy of the company's domain controller unaffected by the malware. Over the following days, Maersk flew this backup to the UK and slowly rebuilt its records of what was in its shipping containers (Greenberg, 2018; Curran, 2020). What is telling here is that what limited further damage was a non-digitally interconnected backup, an alternative path to ‘effective life’. And it is these alternative paths to effective life that surveillance capitalism's business model continually works to undercut.
Nevertheless, however significant the costs of digital network failures have been to date, they represent just a fraction of what is intended in terms of digital interconnection and dependence. Core to the imperatives of surveillance capitalism is the ever-deeper integration of digital networks into necessary and fundamental aspects of our lives, and this involves creating new needs and new ways to mediate every aspect of life through digital interconnectedness. The growth of IoT in particular, and of digitally networked cities in the form of ‘smart urbanism’, presages much greater potential costs of systemic digital risk (see also Kitchin and Dodge (2019)). The recent growth of IoT has been exponential, and these trends are expected to continue. In 2020, there were approximately 31 billion IoT devices globally, and it is estimated that this number will more than double, to 75 billion devices, by 2025 (Schiller et al., 2022: 2). One major company involved in IoT, Cisco, has identified a final goal of making 99% of existing devices digitally connected, data-collecting devices (Greengard, 2015: 13). Though it is always necessary to distinguish between a corporation's intentions and eventual outcomes, in one sense this goal even underestimates the potential proliferation of IoT, because it neglects how these new technologies not only colonise existing goods but also proliferate needs for new objects and functionality (for colonisation, see Couldry and Mejias (2019)).
Conclusion
In light of Elon Musk's attempts to develop X, the ‘everything app’ (Bradshaw, 2022), the rise of surveillance capitalism and its associated ‘infrastructural imperialism’ (Vaidhyanathan, 2011) – that is, its attempt to secure for its firms the position of universal intermediary for other social practices (Curran, 2020) – raises serious questions from a risk perspective. The dual imperatives of surveillance capitalism, to collect and connect, massively intensify systemic risk. They remake the basic infrastructures of our lives in a more fragile way while also systematically working to eliminate other modes of provisioning ‘effective life’ that can serve as a backup when these fragile systems fail.
In light of the 2008 financial crisis and the COVID-19 pandemic of 2020, we have seen first-hand the immense risks that come with highly interconnected social systems. Yet, social learning from systemic risk has not been adequately transferred to other domains, in particular in terms of more effective regulation and governance of the risk potential of digitally networked industries. The primary business model that provisions our increasingly dominant modes of communication and coordination remains beholden to surveillance capitalist imperatives. Again and again, there have been little more than cosmetic efforts to do security better, which, while important, are nowhere near sufficient to address the systemic, structural risks emerging from the basic business model of surveillance capitalism. As Zuboff (2019) has presciently shown, insofar as surveillance capitalism embodies powerful business imperatives, self-regulation or light-touch regulation is completely inadequate to address the massive risks embodied in the dual imperatives of collect and connect (see also Schneier (2018)).
A recent case illustrates this well. A 2021 cyberattack on Colonial Pipeline led to the shutdown for several days of a series of pipelines that provide 45% of the total oil supplies to the US East Coast (Sanger and Perlroth, 2021). The results were significant: panic buying led to long lines in many states, price spikes, a state of emergency declared by President Biden, and a $4.4 million ransom paid (Morrison, 2021) – all from one leaked password (Kelly and Resnick-Ault, 2021). This new ‘blended threat’ of cyberattacks that affect both computers and critical infrastructure is still only in its infancy, yet it has already shown itself to have potentially massive consequences (Sanger and Perlroth, 2021). 17 Moreover, in this case, it appears that much of the damage was not intended by the hackers: they had only tried to apply the ransomware to the billing system, but because of the complexity of Colonial Pipeline's digital network, the company felt the need to shut down the whole pipeline network. In a system as complex and fragile as our digital networks are becoming, consequences can cascade out of control beyond the intentions of either those who design the systems or even those who seek to exploit them (117th Congress, 2021; Morrison, 2021).
Yet, despite clear warning signs and other near misses (Perrow, 1984), the system of surveillance capitalism proceeds undiminished. This type of digital fatalism, which enables the ‘move fast and break things’ fait accompli of the digital economy, needs to be superseded. Zuboff's (2019) depiction of the dystopian nature of surveillance capitalism when it works as intended can help disrupt this fatalism. Similarly, thinking more seriously about the risks of surveillance capitalism when it does not work as intended can also help to identify the necessity of moving beyond this dominant economic and social system.
Footnotes
Acknowledgements
Thanks to the participants in the ‘Digital Market Infrastructures’ panel at SASE 2022 and to the journal's two anonymous reviewers for their very helpful comments on earlier drafts of this paper.
Declaration of conflicting interests
The author declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: This work was supported by the Social Sciences and Humanities Research Council of Canada (grant number 403-2018-0263).
