Abstract
How have digital technologies affected the market logics and economization that constitute the underlying governing rationality of neoliberalism? This essay unfurls five theses that further develop the concept of technoliberalism, the intensification of neoliberalism through computational technology, in the context of the networked public sphere: (1) technoliberalism names the dominant governing rationality in cultures where digital computation technology suffuses everyday life; (2) technoliberalism replaces public, democratically accountable power with the private, technical expertise of digital technology firms; (3) technoliberalism focuses on contriving technical systems to change culture at the expense of democratic argument and deliberation; (4) technoliberalism intensifies the commodification of attention, resulting in undemocratic forms of “noopower”; and (5) technoliberalism standardizes subjectivities through grammatization. Each thesis complicates the prospects of democratic deliberation in the networked public sphere and articulates lines of communication research necessary for keeping democratic practices vibrant.
Introduction
On 12 September 2017, Apple’s senior vice-president for retail, Angela Ahrendts, took the stage during the annual Apple Keynote event. Ahrendts followed Apple CEO Tim Cook, who had, adhering to the generic expectations of these technology events, just extolled the virtues of new technical improvements across Apple’s offerings—most notably announcing details about the new iPhone X smartphone. As Ahrendts shared Apple’s evolving retail strategy, she noted, “It’s funny, because we actually don’t call them ‘stores’ anymore. We call them ‘town squares’ because they’re gathering places for 500 million people who visit us every year … places where everyone’s welcome, and where all of Apple comes together” (Apple, 2017a).
Ahrendts went on to detail how Apple Stores in major metropolitan areas were being redesigned with plazas to “meet up with friends” or “listen to a local artist on the weekends.” Inside, she explained, “we’ve designed a forum, a place for customers to create, collaborate, or just connect again with one another” (Apple, 2017a). The corporate absorption of the language of democratic gathering, paired with the simultaneous positioning of the subject as a consumer rather than a citizen, is legible as a species of neoliberal rhetoric. Apple’s shift in marketing strategy appears, if nothing else, as a cynical public relations stunt to co-opt democratic ideals and increase its own profits. Yet, Ahrendts’ announcement offers a slight twist on the typical neoliberal fare, one that foregrounds the role of digital technology in intensifying the particular governing rationality of late capitalism—what we will refer to as “technoliberalism” (Malaby, 2009; Pfister, 2018).
This twist can be detailed with further exploration of the activities envisioned for the new Apple Town Squares. An April 2017 press release announcing “Today at Apple” programming adumbrates the pivot from retail store to town square, quoting Ahrendts saying, We’re creating a modern-day town square, where everyone is welcome in a space where the best of Apple comes together to connect with one another, discover a new passion, or take their skill to the next level. We think it will be a fun and enlightening experience for everyone who joins. (Apple, 2017b)
What kinds of activities will these new Apple Town Squares offer? Photography workshops that teach tips on “capturing candids” or “building a brand on social media,” Teacher Tuesdays for school instructors “to learn new ways to incorporate technology into their classrooms,” opportunities for businesspeople “to engage with global and local entrepreneurs in the new Business Circuits program,” specialized Kids Hours teaching coding with Apple’s programming language or how to make music with Apple’s GarageBand, and lessons on community organizing from local radicals (Apple, 2017b). As if! The incongruity of that last farcical item with the activities actually outlined in Apple’s vision for its retail stores lays bare the fantasy that these spaces would be hospitable to the wide range of activities that sustain civic culture in town squares and other public spaces. “Everyone” who joins the activities at the Apple Town Squares will be welcome so long as they pursue individual, depoliticized, aesthetic projects.
While traditional town squares are often sites of public pedagogy, places where citizens go to learn from each other, Apple’s vision of public pedagogy is avowedly market driven. As technology journalist James Vincent (2017) explains it, “Apple frames these disciplines [of photography, coding, and music-making that will take place in the new town squares] as modern equivalents to the Medieval trivium—an essential educational resource that makes a person a person.” Ahrendts licenses this analogy in the Apple Keynote by announcing a new position within their retail structure: the Creative Pro, who helps bring Apple users’ visions to life using the latest Apple technologies. Ahrendts explains that “the Creative Pro is now to liberal arts what the G/genius has always been to technology” (Apple, 2017a). It isn’t clear if Ahrendts was referring to technological geniuses throughout history or to the “Geniuses” who provide technical support at Apple retail stores. In either case, Ahrendts’ intimation is that there is finally a use for those liberal arts degrees: they can be harnessed to Apple’s computational processing power to help customers take more striking portrait photos or develop a consistent Instagram brand. To put it mildly, this represents a departure from articulations of the liberal arts that reigned in earlier moments of liberalism. Traditionally, advocates have tied defenses of the liberal arts to participation in public life; indeed, the classic trivium of the liberal arts—rhetoric, grammar, and logic—formed the basis of education for centuries so that citizens could be capable of the argumentation and advocacy required for a robust public sphere. Now, following Ahrendts’ reasoning, the liberal arts are being repurposed for more market-oriented ends. 
Hannah Arendt’s (1958) “space of appearance,” a place where the trivium was practiced for democratic ends, is thus converted into Angela Ahrendts’ marketplace of appearance, where Apple’s new trivium is yoked to more efficiently moving consumer products. While land ownership was once a condition for participation in the public sphere, Apple’s vision of the public sphere presumes one has an Apple device.
What is lost in the articulation of Apple Stores as the new Town Squares is the capacity of traditional town squares (understood as common meeting places for citizens) to support collective action oriented toward raising awareness of systemic problems of public life. Private, corporate spaces do not have the legal protections of public spaces for freedom of speech, assembly, and protest. It is difficult to imagine Apple Town Squares tolerating a protest about labor conditions at the Foxconn factories where Apple products are made. This is especially so given the recent court case that granted the Mall of America the authority to remove Black Lives Matter protesters from its premises (Brown, 2016; Moeckli, 2016). “This vision of civic action,” Megan Beam and Greg Dickinson (2015) argue in the context of the new lifestyle centers where many Apple Stores are housed, “is made possible only and exclusively through the structures of consumer capitalism. And the modes of affiliation offered are not those of a community built of difference, but those built out of the smallest units imaginable: the individual and the nuclear family” (p. 180).
Absorbing the civic language of the town square implies that Apple Stores will carry democracy’s torch; yet, the constraints of the space, the kinds of groups and the activities that are encouraged to occupy that space, and the telos of the corporation all portend market activity rather than civic activity.
Technoliberalism presumes that digital technology can address liberalism’s shortcomings with an attenuated form of togetherness mediated by corporate platforms and focused on individual empowerment through hardware and software upgrades. The Apple Town Square is a synecdoche for this guiding assumption, promising both the educational curriculum and deliberative arena to manage what passes as civic life under the conditions of technoliberalism. Indeed, Apple’s rebranding of its retail empire brings into focus a number of these questions about the future of deliberation in the public sphere, especially in its digital and networked instantiation. What assumptions guide and authorize the convergence of digital technology, late capitalism, and the public sphere? What happens to democratic accountability as private, profit-driven technology firms supplant the power of public institutions? What kind of future does democratic public argument have in cultures that increasingly put faith in devices, social networks, and algorithms to make decisions? What new kinds of power are being exercised in the networked public sphere? What does market-driven digital technology do to how we think about subjectivity? We broach each of these questions, in turn, with the following five theses on technoliberalism and the public sphere.
Technoliberalism names the dominant governing rationality in cultures where digital computation technology suffuses everyday life
Naming is framing, and any frame that does not account for the central role of computational technology in shaping the flow of contemporary power obscures how market logics are intensified through digital technologies. In the same way that Foucault (2008, pp. 259–260), in The Birth of Biopolitics, implies that the disciplinary power associated with the modern political project was being complicated by the biopolitical power exerted by neoliberal rationalities, so too is the biopolitical power of neoliberalism now being altered by the kinds of power made possible through computational technologies. Thematizing this shift in the operation of power is one goal of this essay. To do so requires amending the term “neoliberalism” to foreground the technical dimensions of contemporary infrastructures of power.
We recognize the risk in working within this critical tradition. “Neoliberalism” has become an ill-defined “all purpose denunciatory category” (Flew, 2014, p. 51), collapsing a number of theoretical and historical analyses of the market and market logics into an all-encompassing category. As Wendy Brown (2015) notes, the term has become a “loose and shifting signifier” (p. 20). Nonetheless, as Brown (2015) explains, underlying these plural schemes of neoliberalism “is a distinctive mode of reason, of the production of subjects, a ‘conduct of conduct,’ and a scheme of valuation. [Neoliberalism] names a historically specific economic and political reaction against Keynesianism and democratic socialism, as well as a more generalized practice of ‘economizing’ spheres and activities heretofore governed by other tables of value” (p. 21).
Likewise, contemporary criticisms of neoliberalism often share an understanding of how market logics have become naturalized, fetishized, and dominant. Critiques of neoliberalism acknowledge how economization risks saturating human life in two senses. First, market logics gradually spread to previously non-economic realms; this is economization as the conversion of non-economic activity into economic activity. Second, market logics relentlessly seek greater efficiency; this is the economization of power, which, following Jeffrey Nealon (2008), always seeks efficiency gains. If neoliberalism naturalizes the market as the model for all human activity, then it invariably squeezes out other values. Neoliberal rhetoric, which celebrates the individual, competition, the work ethic, privatization, and social Darwinism, erodes faith in the commons, mutual interest, cooperative enterprise, and the very idea of the public good. As Robert Asen (2017) writes, a sense of the “public good constitutes a practice of articulating mutual standing and connection, recognizing that people can solve problems and achieve goals—and struggle for justice—through coordinated action” (p. 331). Mutual struggle, common ground, collective action: these are the values at stake in the struggle over neoliberalism.
What, then, is technoliberalism, and how does it depart from more typical understandings of neoliberalism? We define technoliberalism as the intensification of neoliberal governing rationalities through digital computational technologies (following Pfister, 2018). “Technoliberalism” usefully describes the capture of many digital technologies by market logics and, moreover, how digital technologies within this framework accelerate the economization of previously non- or under-marketized arenas of human activity. We take our inspiration from Thomas Malaby’s (2009) theorization of “technoliberalism” in his ethnographic study of Linden Labs, the maker of the quasi-virtual reality environment Second Life: The designers of digital space are shaped by a set of ideas about technology and authority that continue to resonate throughout the halls of Silicon Valley. I term this distinctive combination of distrust of vertical authority, faith in technology, and faith in the legitimacy of emergent effects as “technoliberalism,” which marks both its similarities to neoliberal thought but also its emphasis on contriving complex systems through the manipulation of technology. (p. 16)
The frame of technoliberalism acknowledges that digital technologies play a central role in the contemporary configuration of markets and market logics.
Technoliberal rhetoric is most visible as it is articulated to digital technologies, diffusing the terms, tropes, and frames of technoliberalism through sites of tech evangelism like Apple Keynotes, TED Talks, CES (the technology trade show formerly known as the Consumer Electronics Show), and through commercial advertisements across different media. But technoliberal rhetoric is not just a communicative repertoire deployed to justify shiny new consumer products or better living through digital technology, for it appears more subtly as a governing rationality that guides the possibility of deliberation in the networked public sphere. As Malaby (2009) points out, [T]echnoliberalism is not simply neoliberalism in another guise; there are core differences. While Adam Smith conceived of a market that was in a way a natural and ineradicable part of the landscape (based on the human propensity “to truck, barter, and exchange”), and neoliberal thought continues to see the market in this way, technoliberalism holds up the idea that such complex systems can be contrived, in their entirety, through digital platforms. The liberal component is the imagined freedom of individuals to perform as such within designed systems, generating collective effects that are thereby legitimate. (p. 133)
Google’s (n.d.) mission to “organize the world’s information and make it universally accessible and useful” encapsulates the spirit of technoliberalism. It presumes that a system can be engineered to make sense of the world’s information (that, indeed, the world could be understood as a composite of “information”), it presumes that corporate entities are capable of and justified in deciding the parameters of “usefulness,” and it presumes that if information can be made both useful and universally accessible, then this near-perfect market form could—and should—steer individual and collective judgment.
Technoliberalism assumes the virtues of horizontalism, which emphasizes how internetworked technologies “flatten” the playing field, to use Thomas Friedman’s (2007) shopworn metaphor, in ways that make emergent judgments appear more just. For many of Malaby’s (2009) Linden Lab employees, the world is imagined as “a level playing field populated by other individuals pursuing their enlightened self-interest” (p. 47). In this networked, technoliberal imaginary, institutions are irrelevant, social divisions are infinitely surmountable, and class distinctions have been dissolved. The technoliberal imaginary envisions a world devoid of power, which allows adherents to believe that “the emergent properties of complex interactions enjoy a certain degree of rightness just by virtue of being emergent” (Malaby, 2009, p. 56). Symptomatic of this phenomenon is the largely celebratory rhetoric attached to Big Data, algorithms, and artificial intelligence. If the world is conceived as devoid of power relations, then technoliberals will understand democratic deliberation as a challenge of aggregating more data and improving algorithmic processes to shape human interaction. The failings of previous governing regimes can be corrected for—and new problems can be reflexively adapted to—by digital technologies that operate in more ubiquitous, granular, and networked ways. However, such approaches conceal—or exacerbate—continuing inequality along a number of axes: economic, political, social, communicative. Thus, critics of technoliberalism must trace how democratic deliberation, oversight, and decision-making are supplanted by technological fixes instead of structural ones.
Technoliberalism replaces public, democratically accountable power with the private, technical expertise of digital technology firms
Neoliberalism’s “market regime of governance” (Asen, 2017, p. 330) is being transformed, under technoliberalism, into a regime of governance that is not just marketized but thoroughly technologized. (Technology, of course, has always been at the heart of liberalism, but it has not always been so central to the broader imaginary.) Digital technologies both supplant and supplement state power through a digitization of government services and, more broadly, the public sphere. The effects are both granular and institutional. Both how we do things and whom we trust to do them have been revised. Land-grant universities established “to promote the liberal and practical education of the industrial classes” (emphasis added) per the Morrill Act of 1862 are increasingly constrained by metrics that privilege the efficiency of online courses and the curriculum of science, technology, engineering, and mathematics (STEM). Coding is becoming the new grammar, and Silicon Valley coding camps, Khan Academy, and MOOCs are often seen as the preferred sites to master these new language rules. We hail a ride with an app, and we are told that a tech company is better equipped to provide this service than taxicab companies and public transportation authorities (which requires looking past the fact that ride-sharing companies regularly lose billions of dollars a year). We are promised that an anonymous software engineer with a computer can successfully and ethically redesign the fundamental building block of our economic infrastructure, so we contemplate investing in Bitcoin.
As Frank Pasquale (2017) explains, “When state authority contracts, private parties fill the gap. That power can feel just as oppressive, and have effects just as pervasive, as garden variety administrative agency enforcement of civil law.” And, indeed, because state functions offer one largely untapped source of revenue, large digital firms “aspire to displace more government roles over time, replacing the logic of territorial sovereignty with functional sovereignty. In functional arenas from room-letting to transportation to commerce, persons will be increasingly subject to corporate, rather than democratic, control” (Pasquale, 2017). The ascendant corporate control of public life risks a kind of neo-feudalism, where contracts with digital firms replicate the relationship between vassal and noble. Robert Lee Hale’s observation continues to ring true: “There is government,” or, in a more Foucauldian vein, governmentality, “whenever one person or group can tell others what they must do and when those others have to obey or suffer a penalty” (qtd. in Samuels, 1992, p. 184). The argument that shrinking the state will increase freedom is sensible only if corporate power is not interpreted as a risk to that freedom.
The 2016 presidential election evidences the corporate governmentality exercised by private technology firms. Facebook, Google, Twitter, and Instagram shaped deliberation and voting practices, as in prior elections. However, high-profile controversies relating to the hyper-targeting of advertisements to prospective voters, lurid tales of Pizzagate and pee tapes, Russian troll farms, and the circulation of memes with the loosest of connections to facticity all called into question the outsized role that digital platforms played in the results of the election. The early hope for the networked public sphere was that the affordances of speed, reach, and anonymity would aid debate and deliberation, allowing citizens to participate in a multiplicity of digital enclaves that would link back into the primary attention backbone of the digital media ecosystem (Benkler, 2006). The structure of the networked public sphere would, as a consequence, be more organic, reflexive, and adaptive than the corporate-controlled, mass-mediated public sphere of the late 20th century. Bots, computational propaganda, and the algorithmic patterning of attention, all of which have intensified since the 2012 election, raise questions about how vulnerable the networked public sphere is to orchestration by powerful interests who can resist democratic entreaties.
Antoinette Rouvroy and Thomas Berns (2013) characterize this kind of computational orchestration of power as “algorithmic governmentality,” a political rationality “founded on the automated collection, aggregation and analysis of big data so as to model, anticipate and preemptively affect possible behaviours” (p. 10). Big Data is the condition of possibility for algorithmic governmentality, but because private interests are the primary actors in the Big Data game, the algorithmic governmentality they are able to leverage is more substantial. As Rouvroy and Berns (2013) detail, in the algorithmic governmentality era one is rather witnessing a colonization of public space by a hypertrophied private sphere … the systematic capturing of any available human attention [is] for the benefit of private interests (the attention economy), rather than to foster democratic debate and serve the general interest. (p. 3)
The circulation of the Pizzagate story—a thoroughly unsubstantiated claim that Hillary Clinton was at the center of a Democratic Party child sex-trafficking ring run out of a pizzeria in Washington, DC—is one example of how algorithmic governmentality extends the power of private interests. Since platforms like Facebook prioritize “engagement” above all else (Birkbak & Carlsen, 2016), their algorithms are structured in such a way as to advantage the attention-getting—and what could be more attention-getting than the claim that Clinton was a child sex trafficker? Although the public sphere has always been imbricated with private power, the networked public sphere is fundamentally reliant on private power. In the early modern public sphere, newspapers owned by private interests were discussed at sites like coffeehouses, which, while privately owned, exercised less oversight over the topics and style of conversation (in part because the participants had already self-selected for social identity) (Habermas, 1962/1989). In the networked public sphere, participants largely self-select for identity and discuss civic affairs on sites run by for-profit industries that algorithmically shape the conversation in such a way as to maximize profits. Complicating this phenomenon is that the affordances of the platforms, like global reach, untraceable anonymity, and proprietary algorithms, make governmental oversight and regulation more difficult.
Algorithmic governmentality aims to supplement the medium of the networked public sphere, but some digital technology corporations are making efforts to outright supplant democratic governance. In September of 2017, Amazon announced plans to build a second headquarters (HQ2), issuing a call for proposals from cities interested in hosting the tech giant. The proposals reveal that metropolitan governments are willing to relinquish many dimensions of democratic governance to private corporations. Chicago offered to redirect as much as 100 percent of the income taxes incurred by Amazon employees back to Amazon. Fresno, California, offered to place all of the taxes and fees generated by Amazon into a jointly administered Amazon Community Fund; Amazon would help decide which public projects would be funded, and city projects chosen by Amazon would feature a “This project brought to you by Amazon” label (Holder, 2017). Commenting on the policy, Matt Gardner, senior fellow at the Institute on Taxation and Economic Policy, said, “Diverting income tax revenues—or any tax revenues—directly to privately held corporations I think is a step away from democracy; a step away from responsible governance. And a step toward having unelected government officials like Jeff Bezos” (qtd. in Holder, 2017). Stonecrest, Georgia’s offer to rename a 345-acre neighborhood Amazon, Georgia seems innocuous in comparison to plans that give control of public funds to a private corporation operating without citizen input or oversight (Holder, 2017).
As exemplified by Amazon, the private power of digital corporations is not subject to the same democratic oversight that earlier corporations faced. Governments and policies are shaped by fundamentally different economic interests that make regulation in the public interest more difficult. “Detroit’s golden-age big three,” GM, Ford, and Chrysler, have been replaced by Silicon Valley’s behemoths, Facebook, Apple, and Google (Madrigal, 2017). Alongside Amazon and Microsoft, the “Silicon Valley big three” comprise the top five most valuable companies by market capitalization. These technology companies differ from the more traditional business models of the auto companies in ways that complicate democratic oversight. First, the majority of their revenue comes from overseas (Madrigal, 2017). Absent a global regulatory authority, the ability to constrain these digital firms is therefore limited. Second, technology companies employ a fraction of the number of people that manufacturing companies do (Madrigal, 2017). Whereas a central tenet of Fordism was to pay wages high enough to create consumers, today’s digital corporations lack both the workforce and the tangible products to operate under the same principles. The “gig economy” means that the political imperatives associated with substantial, lifelong employment, such as unions and pensions, have dissipated. The dominant market interests have become unmoored from the sovereign entailments of both a national body politic and a national body laboring. The distribution chains for technology often appear seamless, and this ease of transgressing borders mitigates the need for the political expertise and diplomacy required to negotiate everything from union contracts to trade policies.
We have thus far detailed how new technologies are being developed to put a squeeze on the state from the outside. A parallel digital transformation—read, economization—of governmental institutions and processes is unfolding from within, as well. “The digital economy,” Nick Srnicek (2016) writes, “is becoming a hegemonic model: cities are to become smart, businesses must be disruptive, workers are to become flexible, and governments must be lean and intelligent” (p. 9). The state is obeying this imperative, redesigning itself with the aid of digital technologies. Barack Obama’s Digital Government initiative (Obama White House, n.d.) breathlessly exclaimed: Today’s amazing mix of cloud computing, ever-smarter mobile devices, and collaboration tools is changing the consumer landscape and bleeding into government as both an opportunity and a challenge. New expectations require the Federal Government to be ready to deliver and receive digital information and services anytime, anywhere and on any device. It must do so safely, securely, and with fewer resources. To build for the future, the Federal Government needs a Digital Strategy that embraces the opportunity to innovate more with less, and enables entrepreneurs to better leverage government data to improve the quality of services to the American people.
The strategy embraces economization, figures US citizens as customers, and centralizes the interests of entrepreneurs. Copy from Accenture Consulting on the Digital Government Strategy echoes these themes: “digital can deliver superior customer satisfaction while lowering costs. After all, this is key to why digital is so transformational” (Zinner, n.d.). In a mere two decades, in the movement from the Zapatistas to Zappos, the transformational power of digital technology has been thoroughly domesticated as a mode of efficient service delivery.
Technoliberalism focuses on contriving technical systems to change culture, at the expense of democratic argument and deliberation
Many technoliberals dismiss the processes that subtend democratic politics as a distraction at best and counterproductive at worst. Peter Thiel (2009), the co-founder of PayPal and an influential venture capitalist, asserts that “we are in a deadly race between politics and technology.” Thiel views politics as unproductive and technology as the only power that is responsive to “the choices of individuals.” Stewart Brand, a noted technology visionary whose 1960s Whole Earth Catalog influenced Steve Jobs, similarly quipped that when you want to change human behavior “you don’t bother with politics … [you] go after the tools … tools make new practices, and better tools make better practices” (Bennetsen, 2011). Kevin Kelly, former executive editor of Wired magazine, characterizes this perspective as “the tool-view of the world, that tools are more powerful than politics that eventually morphed into code is more important than law” (Bennetsen, 2011). These claims assume that technology and the political can be separated—a feature of liberalism that technoliberalism intensifies.
If technoliberalism doesn’t bother with politics, how does it enact change? As Fred Turner puts it, “Engineers try to do politics by changing infrastructure. That’s what they do. They tweak infrastructure. It’s a little bit like an ancient Roman trying to shape public debate by reconfiguring the Forum” (Don’t Be Evil: Fred Turner on Utopias, Frontiers, and Brogrammers, 2017). Jaron Lanier (2010, p. 5), a pioneer in the development of virtual reality, admits that “[t]echnologists don’t use persuasion to influence you—or, at least, we don’t do it very well.” Lanier (2010, pp. 5–6) explains, We make up extensions to your being, like remote eyes and ears (webcams and mobile phones) and expanded memory (the world of details you can search for online). These become the structures by which you connect to the world and other people. These structures in turn can change how you conceive of yourself and the world. We tinker with your philosophy by direct manipulation of your cognitive experience, not indirectly, through argument. It takes only a tiny group of engineers to create technology that can shape the entire future of human experience with incredible speed.
In lieu of exercising rhetorical prowess in the public sphere, technoliberals design complex systems to redirect behavior. Change infrastructure, change the world. The heady expansion in the power of digital technical systems enables the conceit that technoliberalism can bypass the messiness of democracy. As Malaby (2009) observed at Linden Labs, software engineers (the captains of technoliberalism) work to build new systems that can replace existing ways of doing things. Bitcoin, Airbnb, Uber, and similar entities are all symptomatic of this approach. From the tool-view of the world, politics, argument, and deliberation are doomed to failure because they are grounded in the fallibility of human perception. Human perception is a bug, optimally reprogrammed through changes in the technical infrastructure that nudge individual and collective practice.
A cornerstone of the contemporary technical infrastructure is algorithmic decision-making, and efforts to improve this technical infrastructure usually track along what danah boyd (2018) calls “algorithmic solutionism.” From results served up by a search engine to the updates seen in a social media feed, we experience algorithmic decision-making daily. In a 2017 Pew Research survey of technology experts regarding the future of free speech online, Steven Waldman, founder and CEO of LifePosts, explained that “venture-backed tech companies have a huge bias toward algorithmic solutions that have tended to reward that which keeps us agitated” (quoted in Rainie, Anderson, & Albright, 2017). This bias impacts more than the regulation and promotion of online speech, as algorithms are being incorporated into a myriad of decision-making venues. Human resource professionals turn to Applicant Tracking Systems for judgments about which applicants are qualified. School counselors trust predictive analytics when advising students where to apply to college. Police departments employ Beware, a threat-scoring software, to help determine the best course of action when responding to 911 calls (Madden, Gilman, Levy, & Marwick, 2017). Wherever judgment is needed and data are collected, algorithms intervene.
In a testament to how entrenched the tool-view of the world is, when systems fail, technologists often present more technical fixes as solutions. This normative rationality is reflected in headlines such as “These simple design tricks can help diminish hate speech online” (Hao, 2017). Mike Masnick, founder of the technology blog Techdirt, explains that when thinking about how to stop hateful speech on the blog, he pondered: “Can we sort of nudge people in a better direction without being heavy-handed about it?” (quoted in Hao, 2017). Reflecting on how to prevent the internet from being used to spread hatred and violence, former executive chairman of Alphabet/Google Eric Schmidt (2015) wrote in a New York Times editorial that “[w]e should build tools to help de-escalate tensions on social media—sort of like spell-checkers, but for hate and harassment.” In the past, a liberal arts education and encounters in the public sphere may have equipped citizens to engage constructively in tense exchanges; in the near future, a trip to the Apple Store may instead teach them to code around opinions they disagree with.
Technoliberalism intensifies the commodification of attention, resulting in undemocratic forms of “noopower”
Examining how capital, as a primary form of power, is conceived and deployed illustrates how technoliberalism economizes attention. In short, liberalism conceives of capital in monetary terms, neoliberalism expands the sense of capital to include social capital, and technoliberalism understands monetary and social capital through attentional capital. Competition for attention, in other words, mediates individual acquisition of monetary and social capital (Lanham, 2006). The conceit of the “attention economy” underscores this new stage in capital’s commodification of attention through digital technologies. Sean Parker, the founding president of Facebook, is one of many technology executives to implicitly note the growth of a noopower regime, disclosing that the thought process that went into building these applications, Facebook being the first of them … was all about: “How do we consume as much of your time and conscious attention as possible?” And that means that we need to sort of give you a little dopamine hit every once in a while, because someone liked or commented on a photo or a post or whatever. And that’s going to get you to contribute more content, and that’s going to get you … more likes and comments … It’s a social-validation feedback loop. (quoted in Allen, 2017)
Instagram celebrities and YouTube stars are able to convert attention into monetary capital in obvious ways through sponsorships, paid posts, and various other forms of patronage (Marwick, 2013); less famous subjects trade attentional capital through likes and retweets for more subtle advantages in social and monetary capital. While the competition for attention is an old human practice, the centralization and commodification of attention at the heart of economic processes is new.
The turn to attention that theorizing technoliberalism invites orients us to the centrality of “noopower” and “noopolitics” in the networked imaginary—the politics of the nous, the mind; the loops between attention, perception, sensation, memory. Maurizio Lazzarato (2006) opened up a consideration of the noopolitical in the contemporary context by situating power over the nous as fundamental to the control society: One could define the new relations of power which take memory and its conatus (attention) as their object, lacking a better term, as noo-politics. Noo-politics (the ensemble of the techniques of control) is exercised on the brain. It involves above all attention, and is aimed at the control of memory and its virtual power. (p. 186; see also Terranova, 2007)
The noo-political “commands and reorganizes the other power relations” of discipline and biopolitics (Lazzarato, 2006, p. 187). As Rob Gehl (2014) writes, noopower is “the action before action that works to shape, modulate, and attenuate the attention and memory of subjects” (p. 23). Noopower is a new word for an old term, rhetoric, long concerned with the getting, sustaining, and transformation of attention and related terms like perception, sensation, and memory (Pfister, 2014). What is different in the context of technoliberalism is, as Gehl (2014) writes, the institutionalization of noopower in social media platforms, digital technology companies, and other “attention merchants” looking to get inside our heads (Wu, 2016). Moreover, as Zeynep Tufekci (2018) writes, the flow of the world’s attention is structured, to a vast and overwhelming degree, by just a few digital platforms: Facebook, Google (which owns YouTube), and, to a lesser extent, Twitter … These companies—which love to hold themselves up as monuments of free expression—have attained a scale unlike anything the world has ever seen; they’ve come to dominate media distribution, and they increasingly stand in for the public sphere itself.
While noopower has always existed, even if it long went untheorized as a distinct form of power, the centralization of noopower in just a few technology firms is unprecedented.
Developments in hardware technology have made possible the commodification of attention at a more granular level, with the iPhone leading the charge. “Throughout the iPhone’s development,” writes Eric Andrew-Gee (2018), “Mr. Jobs fought to proceed without a keyboard, making the screen larger and more immersive … The screen’s unique power to absorb attention quickly became clear.” As venture capitalist Roger McNamee (2017) explains, “Thanks to smartphones, the battle for attention now takes place on a single platform that is available every waking moment.” The immersive qualities of the smartphone are merely appetizers for the entrée of augmented and virtual reality technologies, which promise to capture, maintain, track, and control attention in ways impossible with a handheld device.
While hardware is an infrastructural condition for the noopower exercised by technology companies, software innovations further capitalize on the mobility and tactility of these devices to shape attention. Tristan Harris, a former Google software developer who ultimately became frustrated with his employer’s efforts to dominate users’ attention, likens the design of email and social networking sites to slot machines: in both cases, users pull down a screen or a lever to get new results, and the burning question “What am I going to get next?” keeps them coming back for more (Harris, 2014; Lewis, 2017). Facebook’s notifications system began as a light blue number in the upper right-hand corner of the user interface, but was changed to a red notification because it attracted more user attention. “Red,” Harris adeptly observes, “is a trigger color” (quoted in Lewis, 2017; see also Luckerson, 2017). Efforts to commodify attention are becoming increasingly sophisticated. Snapchat, for example, informs users the instant a friend begins typing a message to them—which effectively makes it a faux pas not to finish a message you start. The app’s Snapstreak feature, which displays how many days in a row two friends have snapped each other and rewards their loyalty with an emoji, similarly incentivizes paying attention to the communicative interactions afforded by the app (Bosker, 2016).
Eric Andrew-Gee reports that it’s common knowledge in the industry that Instagram exploits this craving by strategically withholding “likes” from certain users. If the photo-sharing app decides you need to use the service more often, it’ll show only a fraction of the likes you’ve received on a given post at first, hoping you’ll be disappointed with your haul and check back again in a minute or two. (Andrew-Gee, 2018)
Both Snapchat and Instagram are reading from a script that blends technology and behavioral psychology, distilled in B.J. Fogg’s (2002) Persuasive Technology, which leverages assumptions about social reciprocity to design interfaces that maximize user attention. As augmented and virtual reality technologies become more commercially viable, efforts to transform the advertising model in ways that quantify attention more explicitly are intensifying. The cryptocurrency Gaze Coin (Gaze Coin, n.d.) promises to do just this by monetizing users’ attention in augmented and virtual reality contexts through eye tracking: “Gaze control” is a tool used by creators of virtual worlds that allows audiences to trigger content by looking in the direction of the content—usually initiated by spatial audio—only when they are ready to experience it. In this way audiences are empowered to engage with the content in their own time and according to their true interests—giving them agency and freedom while immersed.
This copy from Gaze Coin’s website appropriates the language of freedom to describe a more controlled environment in which even the glance is monetized.
Technoliberalism standardizes subjectivities through grammatization
If, as Asen (2017) suggests, “a neoliberal public disregards difference and discounts inequality to reassert a singular and universal model of publicity” (p. 337), technoliberalized publics intensify difference and valorize particularities to demonstrate the universal prowess of technological capacities. Indeed, the neoliberal universal model of publicity as imagined by Asen is, we suggest, a byproduct of technological limitations. Historically, to execute an effective marketing campaign, companies and politicians appealed to a presumed universal subject in part because the technology did not exist to efficiently micro-target individuals. But today, as noted by Zeynep Tufekci (2014), any “campaign knows a great deal about every individual.” Instead of the “little capitals” of Wendy Brown’s (2015, p. 36) neoliberalism, subjects under technoliberalism are characterized as stores of information that can be leveraged and monetized for professional, economic, social, and/or cultural advancement. This kind of micro-targeting based on particularized dimensions of one’s subjectivity is a departure from early thinking about how the language of the network provided an anti-essentialist vocabulary for subjectivity (Castells, 1997; Papacharissi, 2011). If subjectivity was the result of being subjected to different networks of attention, experience, and power, and any particular subject was just a point of articulation in a larger web of these conjoined networks, then individual experiences in distributed-but-loosely-connected internetworked contexts provided a metaphor for the nature of subjectivity itself (Davis & Ballif, 2014). Thinking about subjectivity as itself networked provided one avenue to think about how new technologies might create the conditions for what Rosi Braidotti (2013) identified as “an expanded relational self” (p. 60).
We hope that the possibilities for this expanded relational self are still alive but are less sanguine about how contemporary technoliberalism “grammatizes” subjectivity in a way that commodifies the nodal dimensions of the subject. Grammatization refers to the process by which something continuous is made discrete (Stiegler, 2002; Tinnell, 2015). The classic example here is the invention of the alphabet, which took speech—previously undifferentiated at the phonetic level—and turned it into signs that corresponded with sounds. The grammatization of the image provides another example (Pfister & Woods, 2017). In an analog image, like a painting, there are continuous gradients of color that defy clear differentiation. Digitizing that same image, however, turns those continuous gradients into discrete pixels, and those pixels can then be “grammatized” by a hexadecimal code that turns the color “lime” into “00FF00.” Grammatization imposes a grammar—a way of being—on a process.
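The pixel example can be rendered concretely. The following few lines of Python are our own illustration (the function name to_hex and the 8-bit-per-channel encoding are assumptions for the sake of the sketch, not drawn from Stiegler): they quantize a continuous color intensity onto a discrete 0–255 grid and encode it in the hexadecimal grammar mentioned above.

```python
def to_hex(r, g, b):
    """Grammatize a color: map continuous channel intensities in [0.0, 1.0]
    onto the discrete 0-255 grid, then encode each channel as a two-digit
    hexadecimal number."""
    def quantize(channel):
        # The continuous gradient is forced into one of only 256 discrete steps
        return round(channel * 255)
    return "{:02X}{:02X}{:02X}".format(quantize(r), quantize(g), quantize(b))

# A continuous "pure green" becomes the discrete code 00FF00 ("lime")
print(to_hex(0.0, 1.0, 0.0))  # prints 00FF00
```

The infinite gradations between two greens that a painter can mix simply vanish between adjacent steps of the grid; that loss of the continuous is what grammatization names.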
Under technoliberalism, subjectivity, and thus our conception of agency, is undergoing an intensive process of grammatization through digital technology. Grammatized subjects are identified as a composite of likes, favorites, and reactions; of media artifacts, status updates, and tweets that catch one’s attention, favor, and disdain; of social networks that articulate relations and belonging; of images and videos that disclose increasingly intimate (and increasingly computer-recognizable and searchable) details about everyday life; of consumer preferences; and so on. Indeed, the existence of different, discrete, algorithmically analyzable constituents of subjectivity is the premise of the business models of most technology firms. By grammatizing the subject, these firms make visible discrete interests and practices that can be linked to consumer goods, which facilitates targeted advertisements. Instead of realizing the radical potential for “postmodern” subjectivity posited by critics of modernity’s liberal unified subject, grammatization co-opts this fragmented, plural being through commodification. The discrete categories that are now seen as constituting the subject are objectified as these constituents of being are converted into objects that can be analyzed, calculated, aggregated, and controlled. Subjectivity itself is thus converted into one more resource that can be capitalized; the ultimate human resource turns out to be human preferences and activities that can be informationalized.
Consider, for example, how filling out a basic profile on a social networking site like Facebook parses discrete components from a holistic subject: About, Work and Education, Places You’ve Lived, Contact and Basic Info, Details About You, and Life Events constitute Facebook’s current profile grammar. Another grammar is possible. What if the default Facebook categories were Things I’m Curious About, Genesis of My Political Radicalism, or What I Despise? That these examples are difficult to imagine suggests the extent to which Facebook has successfully established a grammar that sets the parameters for how subjects define themselves. And, to make the obvious point, Facebook’s grammar serves its corporate purpose, as this basic information about the subject—age, gender, ethnicity, marital status, location, work details, life events, interests—serves as a baseline for its advertising network. Facebook’s grammar, in turn, encourages certain genres of “sharing”—births, deaths, achievements, failures, proclamations, consumption—that further specify for its advertising algorithm the kind of subject one is.
Asen (2017) suggests that “to the degree to which it exerts force across a network, a neoliberal public obfuscates the diversity of the network in which it circulates” (p. 337). But, given the affordances of today’s technology, a technoliberal public lionizes a particular model of diversity built on the standardization of differences. As Lanier (2010) understands the process, “Every element in the system—every computer, every person, every bit—comes to depend on relentlessly detailed adherence to a common standard, a common point of exchange” (p. 15). Without standardization, technology fails because facilitating interoperability between divergent grammars is inefficient (Kerstan, Kretschmer, & Muehlfeld, 2012). Technoliberalism celebrates the growing circumference and strength of the network but overlooks the fact that the holes in the net through which we each must pass have been cut to only one shape and size. Through these logics of standardization, individual subjects become more interchangeable—fungible—with their singularities increasingly obscured (Stiegler, 2014).
Technology firms, such as Facebook, Twitter, and YouTube, market their standardized grammar to potential advertisers as a micro-targeting of consumers (which is simply another sign of economization: more attention with less advertising dollars). This micro-targeting aids the construction of “filter bubbles” (Pariser, 2011); indeed, the gathering of like-minded subjects is much more difficult without the grammatization of interests that can be linked through digital technology. In some ways, this algorithmically driven advertising model—developed organically out of the categories subjects create—could be understood as superior to the older, broadcast model that relied primarily on stereotypes about gender, race, class, and sexuality. On closer look, however, grammatization of the subject may result in more intensified but subtle forms of discrimination with negative consequences for the networked public sphere. For example, ProPublica’s research into Facebook’s advertising algorithm in the wake of the 2016 US presidential election revealed that one could micro-target advertisements to “Jew haters” because “people had listed those anti-Semitic themes on their Facebook profiles as an interest, an employer or a ‘field of study’.” “Facebook’s algorithm,” the ProPublica researchers found, “automatically transforms people’s declared interests into advertising categories” (Angwin, Varner, & Tobin, 2017). Most famously, perhaps, Donald Trump’s campaign hired the data firm Cambridge Analytica, which used complex psychographic information to develop “predictive personality” models that were used to develop tailored, automated messaging to depress certain voter groups and turn out others (Anderson & Horvath, 2017). These so-called “dark posts,” which are visible only to the targeted, mean “there’s no way for anyone outside of Analytica or the Trump campaign to track the content of these ads.
In this case, there was no SEC oversight, no public scrutiny of Trump’s attack ads” (Anderson & Horvath, 2017). Making public scrutiny of campaign messaging like this more difficult undermines fundamental democratic principles of openness and transparency.
The code-ification of difference further complicates the relationship between the subject and the diversity of the network in which it circulates. In other words, technoliberal grammars make legible some subjects and not others. In 2015, a black software programmer observed that the auto-tagging feature in Google Photos tagged a picture of him and some friends as “gorillas” (Simonite, 2018). The sensors used in hands-free soap dispensers and heart rate monitors, as well as those accompanying fitness trackers and Apple Watches, are optimized for white skin (Fussell, 2017). Apple’s new iPhone X Face ID system, which claims a one-in-a-million chance that it can be unlocked by someone other than the owner, apparently has problems distinguishing between some Asian faces (Zhao, 2017). A Nikon camera feature that alerts the photographer when someone blinks in a photograph flagged an Asian subject as blinking (jozjozjoz, 2009). A new AI program claims to be able to identify gay and lesbian people based on “fixed (e.g., nose shape) and transient facial features (e.g., grooming style),” developed from a database that is itself grounded in a (questionable) prenatal hormone theory of sexual orientation (Wang & Kosinski, 2017). Even the platforms that host deliberation in the networked sphere are coded in ways that presume a level playing field on which every subject can engage in an equal contest of wits (Crawford, 2002). This presumption ignores the ways in which inequitable distributions of power are reflected, and even amplified, in digital contexts, as high-profile instances like Gamergate, and everyday instances of harassment and microaggression, shape who can speak and be taken seriously as well as what topics can be spoken about.
The technoliberal subject feels unique, but this customization may well steer politicization away from more radical revisionings of the social order or a restoration of public trust and public goods. Technoliberal agency is often envisioned as centrally focused on managing one’s own information and the information within one’s network. Indeed, blockchain advocates argue that public trust is inefficient and unverifiable and that decentralized stores of information will offer new venues for transparency and verifiability of everything from currency to identity to coffee. For example, a visit to Civic.com (note the convergence of the civic and the commercial in the URL) will not return a guide to your local election but a company that utilizes blockchain to provide individuals with a tool for sharing their identity in customizable digital units. Technoliberal agency is also articulated as a convening of information flows from other technoliberal subjects, assembling likes, retweets, digital petitions, and other digital signals that purport to condense and convey publics’ opinions to other publics and administrative units. Technoliberal subjects treat social change as essentially a matter of sluicing the right streams of information toward the right decision-makers. Certainly, these kinds of convenings of technoliberal agency through hashtag campaigns like #metoo and #blacklivesmatter have shaped publics’ agendas. Exploring whether these movements have sparked larger structural changes is a necessary task for critics and theorists of technoliberalism.
Conclusion: back to the trivium?
Our articulation of technoliberalism maps the various ways in which the condition of digitality intensifies neoliberal governing rationalities in the context of public sphere deliberation. Although a certain amount of cynicism about the prospects for resistance to a technoliberalized public sphere might reasonably flow from our diagnosis of private, market-driven noopower, our goal was to show how contingent this framework actually is. Technoliberalism may be the default public philosophy of digital culture, but it is neither natural nor inevitable. Rather, technoliberalism was built, layer of code by layer of code, and it can be rebuilt with a more democratic and deliberative ethos, layer of code by layer of code. The task for contemporary critics and theorists is to reimagine the digital interfaces that pattern our daily lives, much as Safiya Noble (2018) developed the blueprint for a search engine guided by public-oriented organizations rather than for-profit companies like Google.
How might we harmonize digital technologies with another public philosophy, one that centers dignity and freedom rather than profit and control? This is the question that must drive a critical theory of digital culture aiming to supplant technoliberalism, and any answer must flow from the deliberative processes that a healthy public sphere historically facilitates. Perhaps, then, a return to some of the educational principles that subtended the early modern public sphere is warranted. Might a return to the medieval trivium, updated in the context of digital media ecologies, provide resources to contest the naturalization of technoliberalism and to invigorate deliberation within and about digitality? Unlike Apple’s pseudo-trivium of photography, coding, and music-making, the medieval trivium of rhetoric, grammar, and logic comprises architectonic arts that undergird all creative impulses. Studying the trivium familiarizes one with the contingency of cultural production, whether that is an oration or a piece of code. Of course, making the trivium relevant in a different cultural context requires conceptualizing rhetoric, grammar, and logic differently. Rhetoric, in our view, is not just an act of intentional persuasion, but an appreciation of how bodies affect each other in ecologies of contingency. Grammar, in our view, is not just about Oxford commas and proper preposition use, but a more expansive appreciation for the elemental building blocks of technical infrastructures. Logic, in our view, is not just a process of lining up p’s and q’s in formal proofs, but a way in which order and meaning are imposed on the world. This understanding of the trivium could transform not just classroom teaching, but the public pedagogies that critics and cultural producers use to contest dominant systems of thought and develop alternatives to them through interventions in the public sphere.
To understand that technoliberalism prescribes certain kinds of affectivity, privileges certain kinds of technical infrastructures, imposes a certain kind of ordering of the world—and that it could be different—is the first step toward thinking beyond the dominant rationalities that govern us.
Acknowledgements
The authors thank Ira Allen and Rob Asen for their feedback on the ideas in this manuscript.
