Abstract
Automated technologies populating today’s online world rely on social expectations about how “smart” they appear to be. Algorithmic processing, as well as the biases and missteps that mark their development, come to shape a cultural realm that in turn determines what these technologies are about. It is our contention that a robust analytical frame can be derived from culturally driven Science and Technology Studies, with a focus on Callon’s concept of translation. Excitement and apprehension must find a specific language to move past a state of latency. Translations are thus contextual and highly performative, transforming justifications into legitimate claims, translators into discursive entrepreneurs, and power relations into new forms of governance and governmentality. In this piece, we discuss three cases in which artificial intelligence was translated for the public: (i) the Montreal Declaration for a Responsible Development of Artificial Intelligence, held as a prime example of how stakeholders manage to establish the terms of the debate on ethical artificial intelligence while avoiding substantive commitment; (ii) Mark Zuckerberg’s 2018 congressional hearing, where he construed machine learning as the solution to the many problems the platform might encounter; and (iii) the normative renegotiations surrounding the gradual introduction of “killer robots” into military engagements. Of interest are not only the rational arguments put forward, but also the rhetorical maneuvers deployed. By examining the ramifications of these translations, we intend to show how they are constructed in the face of, and in relation to, forms of criticism, thus revealing the highly cybernetic deployment of artificial intelligence technologies.
Introduction
It has already been a few years since media and pundits of all sorts announced the onset of a full-fledged “AI revolution,” the result of a noticeable effervescence of newly developed machine learning techniques as well as the urgency with which many firms have woven their ever-smarter innovations into the fabric of everyday life. And indeed, the field of artificial intelligence (AI) has made phenomenal strides in this regard, not only in the digital realm of platforms but in our material, immediate environments, which have become increasingly subject to the same bleakly inductive and classificatory matrices. By contrast, the social sciences and humanities have been characteristically late to the party. Slow to react to these ongoing transformations, they could, for the most part, still be portrayed as disorganized and unprepared for what awaits. The question, then, is what would be required to develop better, more encompassing, and more programmatic views. A first set of such conditions deals with the contextualization of both our disciplines and the technologies they are meant to assess. Fields such as Internet, Network, and New Media Studies have successively transitioned their focus from online sociability and Big Data to algorithms and algorithmic cultures (Roberge et al., 2019; Roberge and Seyfert, 2018). Today’s artificial neural networks and other machine learning variations all borrow from the above objects of study; yet they do so unevenly, and with the unnerving habit of creating flexible—if not altogether elusive—objects of inquiry.
Closely related to the idea of contextualization are the issues of interdisciplinarity and theoretical cross-pollination. Correctly accounting for these requires that we acknowledge how, up to now, significant fields such as Science and Technology Studies (STS) have developed in contrast to, rather than in dialogue with, earlier paradigms. Bruno Latour’s work, to take a preeminent example, has for the most part been construed in opposition to the hermeneutical or critical orientations offered by Paul Ricœur or Jürgen Habermas, for instance. The issue with such quarrels, of course, is that they do no service to actual research. There is an urgent need today to contextualize, i.e., historicize and culturally situate, the conduct of STS. Authors and concepts that are seldom assembled can and should be, in order to better understand and decipher the influence of AI on society and vice versa.
The concept of translation, as developed by Callon and his associates, offers precisely such a meeting point; it is the analytical thread that runs through the cases examined below.
AI, as an object of historical inquiry, has always existed in a state of public controversy. It thus becomes all the more important to pay attention to the discourses and rhetoric displayed by both its proponents and opponents. What are the tropes, metaphors, and analogies being used—and again, to what outcomes? By raising such questions, it becomes possible to bridge STS with other, more recent theoretical currents that focus on legitimation processes (Boltanski, 1990; Lash, 2007). Translation in many ways coincides with such a concept, in that they both emerge and coalesce within the public arenas where claims are made, heard, and contested.
This contribution is organized as a rather substantial first theoretical section, followed by three smaller and more empirical sections, each presenting a specific case study emblematic of current developments in the research, economic, and political realms. We thus intend to echo the call made by prominent scholars regarding the importance of undertaking more precise and empirically informed studies of AI as it is actually deployed.
Theoretical architecture: Translation, justification, and cybernetics
It might not be possible—nor helpful—to describe all the details and ramifications of Callon’s and Latour’s sociology of translation. At its simplest, translation refers to the act of bridging and linking, of “creating convergences and homologies by relating things that were previously different” (Callon, 1981: 211). In the realm of discourse, translation partakes in meaning-making through the dissemination of common and readily usable languages. In the realm of action, translations allow for the connection and bidirectional influence of actors—be they nonhuman “actants”; human-like, such as a large organization; or “truly” human, as in the case of a leader or a spokesperson embodying a broader assemblage. Here lies the “network” quality of Callon and associates’ STS approach, in that it can account for both the quasi-tectonic displacements of underlying currents and the more personalized dynamics that occur. “The fate of innovation,” they say, “rests entirely on the choice of the representatives or spokespersons who will interact, negotiate to give shape to the project” (Akrich et al., 2002b: 217).
The so-called “Godfathers of AI” and latest recipients of the 2018 Turing Award—namely, Geoffrey Hinton, Yann LeCun, and Yoshua Bengio—could be taken as prime examples of what is meant by the notion of translator (Metz, 2019). The idea that they “work together” is especially relevant, as they symbolically rely on each other to disseminate their core values and views about connectionist machines, prediction through induction, and so on (see Cardon et al., 2018). Yet this common endeavor still translates into drastically different styles of attending to the task of problematization, each corresponding to the degree to which these researchers intend to engage society on the issues raised by AI implementation. Of the three, Bengio is, without a doubt, the one most invested in social and political matters. He was instrumental in mobilizing both political and civil society actors around the Montreal Declaration, and he is an important figure in the movement against so-called “killer robots”—two topics we will address shortly. For now, let us simply emphasize how both of these ventures resulted in significant gains in notoriety as well as a remarkable elevation of Bengio’s role as both broker and problematizing figure in the field of AI.
“To adopt an innovation is to adapt it” should be considered a mantra in the sociology of translation (Akrich et al., 2002b: 209). Central, then, is this process of problematization, by which translators frame the problems to which their innovation stands as the legitimate answer, thereby enrolling the very actors whose adoption, and adaptation, they seek.
Translations are performative insofar as they are stated for an audience to hear and make sense of. Broader echoes and decipherments are of the utmost importance, in that they could allow for what is perhaps a more “culturalist” follow-up to Callon’s theory. “[I]ndeed, it would be hard to picture the formation of technology developments and innovation without some kind of shared, though flexibly interpreted, cluster of guiding visions,” Borup et al. (2006: 289) note. Speculations, desires, expectations, and the like surge in all directions, motivated by powerful and often emotionally charged rhetoric. As a whole, these come to inform broader imaginaries and narratives that are truly mythical in nature (Elish and boyd, 2018). AI currently enjoys a profound as well as multifaceted hype that might be rooted in the sort of ambiguity that comes with an uncertain and contingent future. Hype, ambiguity, and efficiency go hand in hand. “Hype is low on informative content,” Guice writes, “but directly states the relevance of the information to a social context” (1999: 85).
As stated above, debates are a constitutive part of complex phenomena such as the deployment of AI. Success is not defined as the absence of criticism but as its adequate handling. Following Natale and Ballatore, “skepticism and criticism [have] added to AI’s capacity of attracting attention and space […] in the public arena” (2017: 2). What there is, in other words, is a subtle yet forceful dance between the capacity to denounce the technology and the capacity to justify it. From a theoretical point of view, this could be related to Boltanski’s sociology of critical capacities and how, for some, it represents the “symmetrical twin” of Latour and Callon’s theoretical frame (Boltanski, 1990; Boltanski and Thévenot, 2006; Guggenheim and Potthast, 2012). Whereas criticisms come forward in public discourse to question any potential flaw, justifications are offered as discourses appealing to higher principles their proponents hope will command respect—innovation, progress, etc. For Boltanski, criticism and justification are inseparable; as mentioned, they are constructed in the face of, and in relation to, one another.
Legitimacy through performance, problematizations becoming practical solutions, and criticisms co-opted by justifications all come to inform a brave new world powered by AI that is, at bottom, cybernetic: a world in which deployment, feedback, and adjustment continuously loop into one another.
To what extent is it then possible to describe such a new, broad, and cybernetic mode of social engineering? The issue is of the utmost importance, as discursivities, rationalities, and logics of power and control are all at stake. The end results of such developments are unavoidably political and normative—facial recognition systems being a prime example. Big Tech firms should primarily be understood as optimizing, all-encompassing agents: the social problems and issues they induce by massively implementing AI technologies are envisaged as outputs, noise, and potential signals that might be used to readjust their whole technological apparatus. Focusing at the outset on scalability and emergent effects, they deploy first and monitor, justify, and fine-tune later. As will be shown in the third section, most of their interactions with society at large take the form of trial balloons, i.e., fuzzily delineated experiments intended to gather information on whatever grounds they can cover without encountering too much resistance. Certainly, in a world where success and advancement are based on flexibility and mobility—and where prediction, control, and ambiguity go hand in hand—power lies in informed adaptation.
Translators and the self-regulating horizon of ethical AI
There is now a cottage industry of declarations, frameworks, roadmaps, boards, and the like dedicated to the handling of ethical matters in AI: cities like Montreal and Toronto have given their names to the “Declaration for a Responsible Development of Artificial Intelligence” and the “Declaration Protecting the Rights to Equality and Non-Discrimination in Machine Learning Systems”; OpenAI, sponsored by Elon Musk, currently works toward safe Artificial General Intelligence; and Amazon, Apple, Google, Facebook, IBM, and others are promoting the “Partnership on AI to Benefit People and Society.” The question, then, is how to make sense of these efforts—namely, what is it that they have in common that enables them to occupy such a central stage in this day and age? Following the theoretical considerations laid down in the first section, it certainly is possible to argue that what we are witnessing are translations turning into problematizations and then developing into justifications. Fears are to be addressed, interpretations are to be tempered, and sensitivities are to find a language by which they can be communicated. As Greene et al. note, “building a moral background for ethical design is partly about shaping public perception” (2019: 8). And yet, it is precisely the conceptual and enunciative nature of the language used that also marks the limitation of these ethical discourses—their very structure as “speech acts” being both what shows their performative reach and what betrays their lack of any substantive, binding commitment.
A key example of such a “glocalized” and all-encompassing ethical statement is the Montreal Declaration, drafted by a group of researchers mostly from the Université de Montréal and, in reality, the concretization of Bengio’s moral vision. While context matters, it might be important to recall that Quebec is a rather small and distinct society in which the state occupies a central role. The Declaration, as said, is part of a larger assemblage that also comprises a state-supported industrial cluster, a district in Montreal where public and private investment is concentrated, and a nascent Observatory on the societal impact of AI. Institutionalization and personalization not being in contradiction, all these initiatives are Bengio-related, and all have benefited—and still benefit—from unflagging state support (Roberge et al., 2019). The Declaration itself has been time and again promoted by Quebec’s Chief Scientist’s Office, while at much the same time the province’s Minister of Economic Development has made clear the strategic importance of the industry: “Artificial intelligence certainly is a priority. I think too often we’ve sprinkled [subsidies] in Quebec and that we do not clearly choose what is important for us. We’re now making a very concrete commitment” (Rettino-Parazelli, 2017; our translation). Thus, we see how scientific, economic, and ethical justifications were articulated to ensure the public’s embrace of this booming new industry.
The Declaration is the result of a two-year process of consultation with diverse actors from the public and private sectors. From the outset, three objectives guided its elaboration, the first of which was “to create an ethical framework for the development and deployment of AI.”
Digging deeper into the Declaration—and into what it obfuscates—epistemological issues begin to appear. The approach is said to be one of “co-creation” and “co-construction,” an increasingly popular notion nowadays despite the fact that it has yet to find a solid rationale in the literature. On the contrary, such a method allows the work and expertise that social scientists can provide to be short-circuited, when these scholars are not simply admonished and dismissed as suffering from “ivory tower” syndrome. These discussions can still be fruitful, but they run the risk of being less informed by input from historical, legal, social, or economic analyses. Indeed, the circumvention of social science expertise—i.e., of those scholars willing to adopt an agonistic posture (Crawford, 2016)—means that a robust checks-and-balances mechanism is bypassed. For its part, the Montreal Declaration, by design, escapes such a potential threat: by preferring to build on what “citizens” have to say, it sidelines the very expertise that might have lent its principles critical traction.
Problems continue to proliferate if one turns from how the consensus was achieved to what the actual consensus says. The fact of the matter is that the Montreal Declaration’s value statements and core principles are so broad that they do not even specifically address AI. A sample of such “values” includes: “The development and use of [AI] should increase the welfare of all sentient beings”; “The development and the use [of AI] must contribute to the realization of a just and fair society”; or Principle 6.2—probably the most self-contradictory—“AI development must help eliminate relationships of domination between groups and people based on differences of power, wealth, or knowledge” (IA Responsible, 2018). The fact that the automation of knowledge production is the very definition of machine learning (the reason why people invest time and money in the technology in the first place, and the means by which firms are able to create such metamorphoses in the distribution of power and capital) is simply ignored. In turn, such a “performative contradiction” creates a semantic network of terms that, more often than not, are self-referential and free-floating.
As was emphasized earlier, the broad, international, and “meta” nature of the Montreal Declaration also signals that it is not a matter that can realistically be handled in a local political context. The context, in other words, is one that renders context itself apparently irrelevant—the (American) Partnership on AI, somewhat conversely, struggles to acknowledge how American it actually is in its views and objectives. Looking at the Quebec government’s efforts, one sees that it has done nothing but promote AI and invest in Montreal’s AI ecosystem. The mantra is, again, to deploy first and to monitor, justify, and regulate later, if ever.
Machine-learned solutionism and justificatory performance: Zuckerberg’s testimony
In early April 2018, Facebook CEO Mark Zuckerberg appeared in public hearings held by the US Congress in what was shaping up to be one of the defining moments in the company’s history. At the center of the event were allegations regarding Cambridge Analytica’s use of the personal data of over 87 million Facebook users during Trump’s 2016 presidential run to create so-called “psychographic” profiles of individual voters (Rosenberg et al., 2018). Most media reported on the tragicomic aspect of the hearings, as Zuckerberg’s uneasiness and definitive lack of bravado were counterbalanced by the quasi-comical technical ignorance revealed by some of the elected officials (Newton, 2018). Yet a more critical and robust investigation would have noted the way responsibility for the whole debacle was twice deferred. First, elected officials appeared far more inclined to listen to Zuckerberg’s opinions on regulatory matters than to actually undertake a leading role in the talks. And second, the not-too-charismatic CEO himself repeatedly dodged questions regarding accountability and instead promoted machine-learned content moderation systems as the remedy for whatever ills the platform might face.
Throughout his two-day-long testimony, Zuckerberg (2018b) put much emphasis on what he clearly envisioned as his favored solution, namely, an ever-increasing use of machine learning systems deployed to identify and, as much as possible, pre-emptively root out harmful content: “the sheer volume of content on Facebook makes it so that [no amount] of people that we can hire will be enough to review all of the content. We need to rely on and build sophisticated A.I. tools that can help us flag certain content.” While such a massive implementation of automated decision-making systems—admittedly helped, in a first phase, by more than 20,000 human moderators—appears to be the sole scalable answer, many experts argue that both text and image processing still lack the capacity to correctly decipher contexts and meanings (Gillespie, 2017). These technical objections, acknowledged by Zuckerberg (2018b) during his testimony—“Some problems lend themselves more easily to AI solutions than others. So hate speech is one of the hardest, because determining if something is hate speech is very linguistically nuanced, right?”—were nonetheless circumvented through the sheer promissory value of what Elish and boyd have called “the magic of Big Data and AI”: “By calling upon a future that is imminent but always just beyond reach, what technologies can currently do is not as important as what they might do in the future” (2018: 10). One could hardly find a better description of Zuckerberg’s (2018a) rhetoric, as when he contended: “I am optimistic that, over a 5 to 10-year period, we will have A.I. tools that can get into the linguistic nuances of different types of content.” Through repetitive and intentionally vague references, and without ever touching on technical specificities, the Facebook CEO thus managed to evade any interrogation or attack directed at him.
But what would it mean for Facebook, YouTube, or any other digital platform to actually develop and implement effective machine learning systems of content moderation? AI-gatekeeping would have to be interpreted as quite the strategic adjustment when considered in terms of what Gillespie (2010) has called platforms’ “sweet spot.” Digital platforms, he explains, have historically attempted to preserve a neutral, “honest broker” and “hands-off” posture—relying on the safe harbor legislation that protects legally defined “online intermediaries.” Faced with the magnitude of the events for which digital platforms are now being held responsible—a telling example being the recent use of Facebook by the Myanmar military in its operations against Rohingya populations—such a position might simply no longer be tenable. Departing from their traditional justificatory repertoire, Facebook’s representatives now have to achieve a new balancing act between the liability-free stance they are leaving behind and the full-on responsibility they are still actively trying to avoid. This repositioning has been carried out through a two-pronged legitimacy claim: to conduct a “large-scale democratic process” aimed at the identification of culture- and country-specific moderation guidelines—“The idea,” says Zuckerberg, “is to give everyone in the community options for how they would like to set the content policy for themselves”—which are then to be enforced through machine-learned tools (Zuckerberg, 2017). The democratically informed, yet automated and therefore neutral, representational quality of such a process is what affords Facebook the necessary margin for its new, if vague and somewhat depoliticized, rhetorical posture. AI-gatekeeping, portrayed as the efficient and scalable solution to, and impartial arbitrator of, what is or isn’t harmful content, thus allows for the depoliticization of the editorializing enterprise and reconciles Facebook with the “online intermediary” status it is aiming for—and the relative partisan immunity that extends from it.
The noticeable emphasis Zuckerberg placed on AI systems of content moderation therefore has to be understood as the proactive component of a larger scheme. This effort, in turn, is intended to focus ongoing discussions about the “supervision” of digital platforms on concerns over ethics and governance—what might be or ought to be done—while avoiding the social issues relating more directly to control, i.e., who holds the power to enforce decisions. Indeed, we saw on a few occasions Zuckerberg’s obvious discomfort when interrogated about his platform’s dominant position. Perhaps the best expression of this uneasiness came when an elected official inquired: “If I buy a Ford, and it doesn’t work well and I don’t like it, I can buy a Chevy. If I’m upset with Facebook, what’s the equivalent product I can go sign up for? … You don’t feel like you have a monopoly?”—to which the CEO succinctly answered: “It certainly doesn’t feel that way to me” (quoted in Dayen, 2018). Moreover, a leaked photo of Zuckerberg’s preparation notes revealed that, if pressed on the issue, he was ready to argue for the importance of strong American digital players to counterbalance the growing influence of Chinese corporations (Cadwalladr, 2018). Thus, the CEO’s hesitancy to engage on this range of issues hints at the way Facebook intends to capitalize on AI solutionism’s discursive tropes—i.e., on how AI is construed as a neutral, purely technical fix that leaves questions of market power and control safely off the table.
As mentioned in the first section, if legitimacy is constructed and thus performative, it relies as well on how it is received, i.e., believed and assented to. Legitimacy, simply stated, is a symbolic give and take—a negotiated accomplishment whose success is never guaranteed in advance.
Unlawful territories; or how to cope with killer robots
The Montreal Declaration and Zuckerberg’s hearings demonstrate how actors involved in AI come to deal with the many ambiguities they often bring to the fore, and how they ultimately circumvent the scant regulatory efforts laid out by governing entities. An inquiry into lethal autonomous weapon systems (LAWS) and the way they are currently debated and problematized reveals an even fuzzier picture, where conflicts are both internal and external, and where unsettled problems remain exactly that: unsettled.
Confronted with such a wide array of questions about the responsibilities incurred through what happens both on the battlefield and in the laboratory, the field of AI has been, and still is, undergoing a fairly significant crisis. The normative landscape defining what counts as a legitimate proximity to LAWS-related activities is changing constantly, allowing very few actors to stay outside the debate. It is our contention that such a massive opening to ethical reinterpretation has generated a profusion of problematizing undertakings of various sizes and configurations, all intended to redraw the boundaries of moral and legal liability. This effervescence was ultimately induced by a general concern over being singled out as the one actor who did not do enough to prevent the bleak outcomes many are expecting.
As just mentioned, such translation-problematizing efforts have taken many forms, often guided by competing interests. Among the most publicized cases of the sort were the public protests of workers from Google and Microsoft against the contracts these firms had signed with US military entities (Horgan, 2018). In both cases, employees construed the projects they were working on as going too far and crossing all sorts of lines by allowing for enhanced lethal capabilities. While Google admitted that Project Maven was perhaps straying too far from its “Don’t Be Evil” mantra, it still argued that its contribution was confined to “non-offensive” purposes.
These two well-publicized cases thus cultivated a narrative of AI workers opposing collaboration with the military, but such an understanding shouldn’t be generalized to the whole tech industry. As the deputy secretary of defense responsible for Project Maven put it after Google’s departure from the project, “the department was concerned that Google might be the canary in the coal mine, but that’s not what happened” (Work, quoted in Simonite, 2019). In fact, collaboration between tech and the military is growing fast, and the latter appears to understand quite well how it needs to modulate its interactions with this new talent pool. When questioned about the efforts deployed by the Air Force to facilitate collaboration with tech startups, an official explained how “flexible” moral costs were a necessary component of their approach: “If you don’t want to work with us on weapons systems but you do want to work on medicine or green energy or data analytics, we should have an open door that fits the needs of the partnership” (Roper, quoted in Thompson, 2019). This expresses quite well the general pattern of problematizing activities and boundary-drawing currently unfolding. Many AI specialists appear intent on collaborating with military actors as long as it doesn’t mean working on weapons systems per se.
While this massive enterprise of justification and boundary-setting was unfolding mostly between tech firms and AI workers, some of these very same actors were taking part in another distinct, if intersecting, field of problematizations. In effect, corporations, their mostly anonymous rank-and-file workers, and the more recognized figures of the field of AI all engaged in what could be described as a cottage industry of declarations, petitions, and open letters stating their principled rejection of LAWS. Simply through their titles, we can observe the escalating severity of these rejections as the years go by, from the Research Priorities for Robust and Beneficial Artificial Intelligence (Russell et al., 2015) to the Call for an International Ban on the Weaponization of Artificial Intelligence (Kerr et al., 2017) and, most recently, the Lethal Autonomous Weapons Pledge (Future of Life Institute, 2018). In most cases, the same names reappear, with Bengio and Stuart Russell being two main figures; but what should also be noticed are the numerous signatures from workers at Google, Microsoft, and a number of other AI developers that are no strangers to military contracting. These public statements should in turn be associated with—and seen as taking part in and integrating with—the more recent and well-known Campaign to Stop Killer Robots (2019). While tech workers and AI scientists were dealing with the ethical challenges of their own proximity to military contracting, the Campaign was pressing states and international bodies toward an outright ban.
So what can we expect to result from these translations when all is said and done? Many experts, from both the computer and social sciences, agree on the inevitability of some sort of LAWS eventually becoming an enduring component of military operations (Teffer, 2018)—if they aren’t already. In that regard, what remains to be determined is the degree of autonomy we’ll find in such machines. Even then, fully automated defensive systems, such as those supported by figures like Bengio, could easily facilitate the sort of escalation between actors like the US and China that leads to the grim outcome of fully automated warfare.
Conclusion
Among the very first issues raised in this piece were questions related to the different ways in which AI has compelled a renewal of the social sciences. Especially significant were the challenges addressed to STS scholars, namely how they can broaden their—critical—scope while ensuring the pursuit of more precise and empirically informed research. It is our contention that what is at stake in the foregoing is the very possibility of Critical AI Studies (CAIS). What would this entail? Specifically, would such an intellectual endeavor be up to the many challenges posed by the fast-paced and highly scalable deployment of AI technologies in ever more aspects of social life? Without a doubt, AI implementation is a total social phenomenon, one that cuts across the scientific, economic, and political realms at once.
From the preceding, and recognizing now that self-doubt might very well be constitutive of CAIS, some affirmative conclusions can nonetheless be offered. One deals with the theoretical quarrels of the past, which we dismissed as unhelpful in and of themselves and as failing to reflect the contingent realities of AI deployment. Rather, we have tried to emphasize the continuum and fruitful discussions that exist between, for instance, the ideas of translation, spokespeople, problematization, justification, legitimacy-building, and adaptability as a type of rational as well as pragmatic power dynamic. These concepts convey much meaning as they resonate together with the social, cultural, economic, and political evolution of AI technologies—something that in turn informs a second conclusion regarding the importance of the assumptions that have guided our three case studies.
The first of these assumptions is twofold: AI translators are fundamentally unequal, and the current state of problematization is best characterized as its own solutionism. Of course, it would be difficult to compare the charisma of figures like Bengio or Zuckerberg, yet both act as discursive entrepreneurs. Both of them are at their best when speaking about the promissory future of the technology, a future in which the problems raised by AI find their solution in yet more AI.
Another key assumption throughout this piece relates to the fundamental discrepancy between criticism of AI and its justification, namely, how the latter never ceases to benefit from the many weaknesses of the former. The three case studies prove deeply coherent here, notwithstanding the obvious differences of context. Dissident voices taking aim at AI deployment in Quebec are scattered and still unable to compete with the “official” discourse of star translators and most, if not all, of the main institutions in the province. US officials tasked with pressing Facebook on its many shortcomings ended up letting Zuckerberg position his platform as a major actor in any regulatory enterprise that might emerge. Even in the debates surrounding LAWS, efforts to conceptualize risks, threats, and dangers have remained vague—but of a nonperformative and inefficient kind this time—and easily dismissed by the very proponents of the technology they target.
Lastly, we have demonstrated how the current deployment of AI technologies is indicative of an enhanced form of cybernetics, where control joins communication in a continuous loop of deployment, feedback, and informed adaptation.
Declaration of conflicting interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author(s) received no financial support for the research, authorship, and/or publication of this article.
