Abstract
It has been widely observed in recent years that there has been a global crisis of institutional trust. This article explores the concept of mediated trust, using the "Three I's" framework of ideas, interests, and institutions, in order to understand how the interaction between these three elements provides insights into the evolution of mediated trust. It is argued that, in the case of the early Internet, ideas were a key driver of institutional formations. By contrast, with artificial intelligence, there is already an array of powerful corporate interests seeking to drive policy agendas.
The global crisis of institutional trust
It has been widely observed in recent years that there has been a global crisis of institutional trust.1 As United Nations Secretary-General António Guterres has identified, a "malady of mistrust" has been as damaging to global societies as COVID and climate change (Guterres, 2021). As he puts it, "the surge of mistrust and misinformation is polarizing people and paralyzing societies," and "a breakdown in trust is leading to a breakdown in values." There is evidence worldwide of pervasive mistrust of political, media, and business institutions (Daley, 2021; Edelman, 2020).
Declining trust in social and political institutions has been associated with populism and political polarization (Norris and Inglehart, 2019), public health misinformation (OECD, 2020), and citizen disengagement with public institutions. The societal costs of rising distrust can include more risk-averse governments, citizen noncompliance with public policies, declining economic innovation, fraying social cohesion, and reduced legitimacy for political institutions (Kettl, 2017).
In this short article I want to argue that we need a focus upon questions of mediated trust. Recognizing that a feature of modern societies is that our interpersonal, institutional and societal interactions are mediated at every level by communications technologies, we need to consider the relationship between the digital platforms, devices and modes of interaction that are a part of our everyday lives, and whether they engender or inhibit greater degrees of trust. This becomes increasingly important in an age of artificial intelligence (AI), as it promotes not only human–machine communication, but machine–machine communication, and forms of automated decision making which have the capacity to pre-empt conscious human and institutional choices.
Trust as an “invisible institution”
We need to ask the question: what is "trust"? There is an element of knowing it when you see it. The sociologist Niklas Luhmann observes that trust is a basic fact of social life, and that a complete absence of trust in others would prevent you from getting out of bed in the morning (Luhmann, 2017). Trust is a condition of cooperation, and a commitment to cooperation is an essential element of societies whose economies are based upon a complex division of labor, as Adam Smith observed in The Wealth of Nations almost two and a half centuries ago (Tullock, 1985).
Trust has been said to constitute an invisible institution that is central to economic development, social inclusion, civic engagement, governance, the effective operation of public institutions, and international cooperation (Hardin, 2006; O'Sullivan, 2019; Uslaner, 2002; Williamson, 1993). But its embedded and often invisible nature makes its significance difficult to define and measure. As Blöbaum has observed, "trust exists, so it is something tangible. But at the same time trust remains abstract" (Blöbaum, 2021: 3).
Communication and trust
Trust studies have emerged as an interdisciplinary field, primarily informed by disciplines that include philosophy, history, sociology, economics, and political science (Barbalet, 2019; Evans et al., 2019; Hosking, 2014; Möllering, 2006; O'Neill, 2010). I would argue that communications as a discipline has a great deal to offer the study of trust (Flew and Jiang, 2021). We know that communication occurs at three levels: the interpersonal, the institutional, and the societal. It also occurs at three scales: the local, the national, and the global.
Concepts that can inform a general understanding of trust from a communications perspective include: Jürgen Habermas's concepts of the public sphere and communicative action; theories of media influence; debates about mediated populism; the literature on media and post-truth; and moral panics. At the same time, the concept has often been implicit rather than explicit in communication studies, as it has not featured in anything like the same way as debates about truth, or about the systematically distorted constructions of truth associated with theories of ideology (Flew, 2021b). Debates about the media and ideology are also debates about trust and credibility in media, or what I am referring to, for these purposes, as mediated trust (Bodó, 2021; Schäfer, 2016; Strömbäck et al., 2020).
The “Three I's” of mediated trust
In my 2021 book Regulating Platforms (Flew, 2021a), I developed a synoptic history of the Internet from its early forms through to today's platformized Internet using the concept of the Three I's. The “Three I's” referred to ideas, interests, and institutions, and the interaction between these three elements provides insights into the evolution of mediated trust.
What do I mean by the “Three I's”?
First, there are ideas, which are ways of thinking about material objects and relationships that rise to prominence and gain general social acceptance. At the same time, these ideas are often challenged. For example, early work on the Internet refers to the "Californian Ideology" that developed in Silicon Valley, based around what Barbrook and Cameron referred to as "free minds and free markets" (Barbrook and Cameron, 1996).
The second "I" is that of interests: the individuals and organizations that seek to advance their individual or collective social power in the economic, political, and cultural spheres. Of particular importance here are economic interests, such as large corporations. In many respects, then, this refers to the political economy of digital communication.
Finally, there are institutions, which are the organizational forms that both govern and regulate social, economic, and political relations, and through which collective decision making occurs. Institutions can be both nationally and globally based, and include both formal institutions, such as corporations and governmental and international agencies, and informal institutions: the customs, values, ideologies, and beliefs that underpin different institutional cultures (Denzau and North, 1994).
The shifting balance over time between these three elements serves to constitute mediated trust relations, but also shapes futures: the intersection between these elements shapes future sociotechnical practices and institutions in a path-dependent manner. This approach has been influenced by the new institutionalism associated with authors such as the political scientists March and Olsen (1984) and the economists Oliver Williamson, Douglass North, and Elinor Ostrom (North, 1990, 1994; Ostrom, 1990, 2010; Williamson, 1985, 2000), among others.
The “Three I's” and the internet
If we take the example of the early Internet, I argued in Regulating Platforms that of the Three I's, the most significant influence on the Internet's development was that of ideas. A set of ideas about a global interconnected communications network, and its potential to offer decentralized decision making, economic innovation, and a more empowered citizenry, had been circulating for decades, particularly in the United States. These ideas came to both popular and political prominence in the 1990s, as the new technology came to be taken up by billions of users around the world.
As a result, the idea of the open Internet captured the popular imaginary, and shaped the policies and institutional frameworks that would dominate global Internet governance as development of the World Wide Web proceeded apace. What we find, then, is that dominant ideas about the Internet were well established before there were major companies specializing in the Internet. For example, Google was founded in 1998, by which time there were already well over 100 million Internet users worldwide, and Facebook (now Meta) was incorporated in 2005, by which time around one billion people worldwide were on the Internet.
This means that the corporate interests who would subsequently shape Internet policy emerged only after the technology had been largely popularized. By this time, ideas about the Internet as what MIT communications scholar Ithiel de Sola Pool had referred to, as early as 1983, as "technologies of freedom" had already shaped Internet institutions and policies (Pool, 1983). The idea of minimal liability for platforms with regard to the content hosted on them (the "safe harbor" provisions), the idea that platform companies are not media companies, and the notion of multistakeholder governance embodied in ICANN, the Internet Corporation for Assigned Names and Numbers, were all enshrined by the second half of the 1990s. The global information infrastructure had been largely put in place before there was any significant degree of contestation around the rules of the game of the global Internet.
AI and the myth of AI sentience
The year 2022 was the year when public conversations about artificial intelligence (AI) really took off. The realization that AI could produce art works, write student essays, and perform a range of creative tasks took it out of the realm of being seen largely as a technology designed to automate low-skilled jobs. 2
What we call AI is the combination of four elements: computational capacity, data analytics, machine learning, and human interaction. I have deliberately not included the word "algorithm" here, because algorithms are already incorporated into computational capacity, data analytics, and machine learning (Crawford, 2021). What we are talking about are adaptive technologies that work with data. More importantly, there is a human element involved at all stages of AI.
Using the “Three I's” framework, we need to be thinking simultaneously about the dominant ideas concerning AI, the interests that are promoting AI, and those institutions that are, or may be, engaged in the governance of AI.
Ideas about AI have a long history in popular culture, from robots to Frankenstein's monster to the human being absorbed into the machine. An interesting recent reprise of this tradition has been the idea of AI sentience. Google engineer Blake Lemoine claimed in 2022 that LaMDA, the Language Model for Dialogue Applications used by Google, had acquired a form of human consciousness, or what is referred to as sentience: the capacity to experience feelings and sensations.
I found myself wondering where I had come across this notion before. Sure enough, one of the places I had come across it was the 1982 movie Blade Runner, where the replicants have returned from the space colonies, led by the character played by Rutger Hauer. In the famous final scene of the movie, Hauer's character Roy talks about what he has seen and is in effect becoming human, but at the point of acquiring consciousness he dies. This is a very pervasive myth about AI: that it is heading in the direction of not just human intelligence but also human consciousness. It is also a misleading one.
Rather than thinking about AI as being about machines becoming human, we should be thinking about the human role in AI and the interests and institutions that are promoting it. AI is, in many respects, a natural extension of platformization and processes of datafication, and it has been experimented with in various forms since the 1950s. But it has been the combination of the gathering of large amounts of data from multiple sources (big data) and the algorithmic sorting of that data through platforms that has developed the adaptive machine learning capabilities behind the most rapid recent advances in AI.
Trust issues will arise at all stages of the implementation of AI. Molina and Sundar have observed that the extent to which AI is trusted in content moderation on digital platforms can be inversely related to the trust that people have in one another, and in social institutions (Molina and Sundar, 2022). In other words, whether people trust AI technologies is correlated with the attitudes toward trust that they bring to such considerations.
I note here that the bulk of the world's AI patents are being lodged in two countries: the United States and China. What we find is that AI governance is bound up with the business interests of many of the most powerful companies in the world, but also with geopolitical issues. This is quite different, as I have suggested, from the global Internet that emerged in the 1990s, where, in many respects, the idea of the Internet preceded the companies who would play a major role in it. We are also very much aware of the possibilities of biases in AI, including racial and gender biases around issues such as facial recognition software.
Finally, considering the institutions of AI, we have AI codes of ethics being developed; there is quite a boom in codes of ethics at the moment. But one of the questions that always arises with codes of ethics is whether the tech industry can be trusted to regulate its own standards. And if the answer to that is "no," then who should regulate? Different models have emerged, including the Beijing Consensus on Artificial Intelligence and Education, which pointed to the importance of human-centered design. UNESCO has also done considerable work on global codes of ethics. And there is also the idea of governing through regulations and standards, including what are referred to as algorithmic impact assessments.
In short, in terms of trust, what we are going to find is that the institutions that emerge around the governance of AI, and which facilitate trust in it, may well find themselves in conflict with some of the dominant interests in AI. We are also going to find that questions around the role played by AI and its relationship to human thriving and human wellbeing are going to be increasingly important.
Declaration of conflicting interests
The author declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: The author is supported by the Australian Research Council through an ARC Laureate Fellowship FL230100075, "Mediated Trust: Ideas, Interests, Institutions, Futures".
