Abstract
Big Tech whistleblowing is a distinct category that should prompt new theorizing and analysis within the fields of whistleblowing and organization research. I build this idea on two key characteristics of the Big Tech industry: (a) the opacity and intangibility that surrounds its digital technologies and (b) the deep, profit-driven impact it has on democracy and communicative infrastructure. These conditions mark a number of important differences between Big Tech and other “bigs” such as Big Oil, Big Pharma, Big Tobacco, etc. They also suggest, I argue, that whistleblowing within this industry comes with a set of rather specific responsibilities, prospects, and challenges. In order to delineate these, I combine a sociological understanding of the democratic role of whistleblowing with an analytical reading of Frances Haugen’s autobiography, The Power of One, written after she blew the whistle on Facebook’s algorithms in 2021. The analysis leads to the identification of a paradox of visibility. Seen from a democratic point of view, we are increasingly dependent on Big Tech whistleblowers to let us in on the inner workings of Big Tech. We have not been let down. Recent years have seen a steady trickle of Big Tech exposures. At the same time, whistleblowers’ opportunities to make such an impact are circumscribed by the properties and powers of the Big Tech industry.
Introduction
Organizations “know” things that “we” (who are not part of them) do not. In some cases, we are okay with that and accept that organizations can legitimately have what Goffman (1959) called strategic secrets. For example, no one would expect an army to lay out its plans before a battle, just as we understand a company’s need to have trade secrets to secure its competitive position (Florini, 2007). Yet, as Schudson (2015) has discussed, the room for legitimate secrecy and discretion in organizations has shrunk since the 1960s and 1970s, when a cultural shift toward the “right to know” radically altered the playing field, requiring organizations to become more open and transparent. This movement was driven by an emerging insight that organizations do not just have strategic secrets but also, at least potentially, if we return to Goffman (1959), dark secrets – practices that, if placed in the public spotlight, would be viewed as illegal or immoral (or both; Alexander, 2018).
Against this backdrop, it is easy to understand why whistleblowing came into its own in the critical zeitgeist of the 1960s–70s (Melley, 2020). Nader (1972), lawyer, activist, and one of the earliest proponents of whistleblowing, based his defense of whistleblowing precisely on a scathing assessment of organizations: “As organizations penetrate deeper and deeper into the lives of people,” he wrote, the rights of citizens and consumers “are adversely affected” (1972: 7). Because citizens and lawmakers necessarily have limited access to the inner workings of organizations, democracies have, since Nader’s writings, placed their faith in “insiders” to do the dirty work of disclosing wrongdoing for us (Vandekerckhove, 2006).
While this analysis of the relationship between organization, whistleblowing, and society still stands, in my view, it is equally evident that today’s world of production is very different from the largely industrial mode that dominated in Europe and the United States up until the 1970s and 1980s (Castells, 1996). This is especially noticeable in the Big Tech sector (Alphabet, Microsoft, Meta, etc.), where significant parts of production revolve around digitalization, datafication, and algorithmic ordering (Leonardi and Treem, 2020). But should all this matter for whistleblowing research? Is wrongdoing not wrongdoing, and whistleblowing not whistleblowing, no matter when or where it happens? I think it does matter, and I say so because the Big Tech industry, in my view, constitutes a significant change from past forms of production (Moore and Tambini, 2018). This is particularly notable in the central role that algorithms play in its operations. Algorithms are democratically consequential because they combine two characteristics: (a) their digital nature surrounds them with significant opacity and inaccessibility (Burrell, 2016; Gillespie, 2017; Kitchin, 2017; Pasquale, 2015), and yet (b) they have deep and ultimately profit-driven impacts on sociality and communicative infrastructure (Alaimo and Kallinikos, 2019; Amoore, 2020; Couldry and Mejias, 2019; Flyverbom, 2022; Fuchs, 2017; Zuboff, 2022). Since whistleblowers are by definition tied to organizations, as members or employees (Near and Miceli, 1985), it follows that major changes in the nature of production and technology must elicit specialized theoretical reflection on the conditions of whistleblowing within these areas.
The whistleblowing literature has been somewhat slow to catch up with this reality. There is an emerging body of scholarship around whistleblowing and digitalization (Kenny, 2023; Munro, 2017; Munro and Kenny, 2023; Olesen, 2022; Weiskopf, 2023; Weiskopf et al., 2019). While this has offered pertinent analyses of digital surveillance (Edward Snowden and the NSA), tax havens (the Panama Papers), the misuse of personal digital data (Christopher Wylie and Cambridge Analytica), etc., it contains surprisingly few theoretical reflections on the specific relationship between Big Tech and whistleblowing. There is more guidance if we look beyond the usual circuit of whistleblowing research outlets. In First Monday, Di Salvo (2022: 1) makes a thoughtful case for thinking about Big Tech whistleblowing as “a unicum” that is “expanding the scope and role of whistleblowing and leaks in contemporary society.” United States law scholars Wu (2024) and Bloch-Wehba (2024) also address the issue head-on, speaking about algorithmic whistleblowing (Wu, 2024) and tech whistleblowing (Bloch-Wehba, 2024) as a novel, on-the-rise phenomenon that requires attention because “the ascent of the information economy has brought with it major challenges for democratic governance and public knowledge” (Bloch-Wehba, 2024: 1545) and because it exposes “the broad assertions of secrecy so common in the technology industry” (p. 1562). The numbers are on their side: for example, Wu (2024: 16) cites an IPSOS Mori poll of over 1000 tech workers in which a staggering 59% of computer engineers reported having encountered situations involving potentially harmful technology.
However, despite the important, agenda-setting effect of these works, they are also limited, at least as seen from within this article’s ambitions, by the lack of a deep engagement with whistleblowing theory. It is my aim here to advance such a theoretical discussion of Big Tech whistleblowing. I give the discussion empirical shape through an analysis of Haugen’s (2023) recent autobiography, The Power of One, in which she explains her decision to blow the whistle on Facebook’s algorithms in 2021. Haugen’s disclosures first reached the public via an article series in the Wall Street Journal, The Facebook Files. While there is already a vibrant, intellectual discussion of the power of algorithms (e.g. Doctorow, 2021; Vaidhyanathan, 2018; Zuboff, 2022), Haugen’s case deserves special attention from organization and whistleblowing researchers, not only because she is one of the most powerful critical voices to emerge from within Big Tech, but also because we only rarely get the kind of detailed insider information that an autobiography provides.
All in all, this leads me to a somewhat double-edged conclusion: On the one hand, we are more and more dependent on whistleblowers to let us in on the secretive, opaque, and democratically consequential operations of the Big Tech industry. We have not been let down. Frances Haugen is only one among several Big Tech employees who have come forward with their concerns in recent years (Bhuiyan, 2021; Bloch-Wehba, 2024; Di Salvo, 2022). These exposures have set in motion significant reform drives aimed at regulating Big Tech. On the other hand, there are also reasons to be skeptical of a kind of liberal optimism where the world becomes a better place disclosure by disclosure (Kenny, 2023: 65). Rather, I suggest that the situation surrounding Big Tech whistleblowing is characterized by a paradox of visibility. It might seem that we live in times of heightened visibility, and in many ways we do. Yet, as Zyglidopoulos and Fleming (2011) noted in these pages, well before the Big Tech label came into vogue, this visibility is circumscribed, not only by the defensive posture of organizations, but also by the opacity that defines the complex, expert-driven mode of production in late modernity. The Big Tech industry is making this point even more salient.
The paper is divided into two main sections. The first offers a sociological framework that places the whistleblower in a broader history of democracy and suspicion. In the second, I draw on this framework to analyze the case of Frances Haugen and the Facebook files.
Whistleblowing and democracy
It may seem counter-intuitive to start with a historical-sociological account when the goal of the article is to say something about the cutting-edge nature of Big Tech whistleblowing. I do think, however, that any argument about newness must be set in a historical context. I outline two themes that I find particularly important in providing such a context: (a) whistleblowers belong to a democratic ethos of disclosure with deep historical roots, and (b) they are tied to the idea that organizations are inherently suspicious. Both themes contain important ideas for exploring the theoretical novelty of Big Tech whistleblowing.
The politics of disclosure
Democratic history is often told through the usual suspects of elections, parliaments, and parties. This account is not inaccurate, but it tends to overlook how it is also a history of suspicion and disclosure (Keane, 2018). Habermas (1962/1989) does not quite tell his seminal story about the bourgeois public sphere in this way, but essentially, his book describes a drawn-out seismic shift through the 18th century, in which it became increasingly legitimate for citizens to criticize those with power. Zaret (1996) traces this history back even further, to the 17th century, identifying a growing practice of petitioning in England that challenged the “norms of secrecy and privilege” (p. 1540) that had surrounded elites for centuries. Jeremy Bentham is perhaps best known, via Foucault’s (1977) work on discipline, as a theorist of panoptic surveillance (Cutler, 1999: 323). However, he was also one of the first thinkers to formulate a democratic theory of publicity (Habermas, 1962/1989: 99). In his essay, Of Publicity, Bentham (1838–1843: 315) drew a stark contrast between publicity and secrecy, referring to the latter as “an instrument of conspiracy,” claiming that “it ought not, therefore, to be the system of a regular government.” Bentham’s vision was one of systematic mistrust: “Whom ought we to distrust,” he asked rhetorically, “if not those to whom is committed great authority, with great temptations to abuse it?” (p. 314). His solution to these problems was, as the essay’s title suggests, openness, scrutiny, and transparency.
In the first instance, Bentham’s concerns were aimed at government and state. It was not until Karl Marx and his followers that the world of private business began to come under the same kind of scrutiny. Where Bentham had a liberal, almost engineering belief that power would correct itself under public pressure, Marx and Engels had no such illusions, notoriously asserting in the Communist Manifesto that real change cannot be had “without the whole superincumbent strata of official society being sprung into the air” (Engels and Marx, 1848). However, the work of Marx and Engels also kicked the door open for more reformist critiques of business. The muckraking tradition in American journalism around the turn of the 20th century had many faces, but much of its social indignation was directed toward unsafe work conditions, child labor, and sweatshops (Emery and Emery, 1978). Its critical reporting exposed how private companies’ drive for profit caused immense suffering for employees. During the 20th century, and especially the second half of it, social movements, NGOs, and citizens calling for political consumerism, boycotts, and corporate social responsibility accelerated these early criticisms of business and capitalism (Carroll, 2008).
In many ways, these long waves culminated in the 1960s and 1970s when all kinds of authorities came under unprecedented scrutiny and critique (Inglehart, 1977; Keane, 2018; Rosanvallon, 2008; Schudson, 2015). Social movements and journalists, as I have already suggested, had been key actors in getting this wave rolling. However, this family of democratic disclosers was radically extended with the coining of whistleblowing as a new form of action in the 1960s and 1970s (Melley, 2020). I call this change radical because it was the first time in democratic history that organization “insiders” (i.e. employees) were systematically called upon to disclose wrongdoing inside their workplaces. 1
Essentially, this legitimated a kind of loyalty betrayal, but one that was ultimately tied to a democratic purpose. Early proponents of whistleblowing such as Nader (1972: 5) understood well the fundamental loyalty dilemma this involved for employees, and asked: “at what point should an employee resolve that allegiance to society. . .must supersede allegiance to the organization’s policies (e.g. the corporate profit), and then act on that resolve by informing outsiders or legal authorities?” The answer is rarely straightforward for employees, but Nader’s response, and that of many who followed in his footsteps, has been to pin it firmly to the public interest. In simple terms: we don’t care if your boss, unfairly, did not promote you, but we would like to know if he instructs you to let chemicals into our rivers!
Nader’s formulation of a tension between loyalty to society and to the organization invokes the existence of what Alexander (2006, 2018) calls a “civil sphere.” The civil sphere, Alexander (2018: 1050) says, “is organized around a discourse that sacralizes the motives, relations, and institutions necessary to sustain democratic forms of self-regulation and social solidarity.” In this sense, it has a universal claim “that can challenge the particularistic discourses and institutional demands of separate spheres . . . the civil sphere’s communicative and regulative institutions have the power to project this moral language beyond the boundaries of separate spheres and powerfully reconstruct them” (p. 1070). Even if Alexander’s theory is not a theory of organization, it is not difficult to think of his “separate spheres” as organizations and institutions. The connection to whistleblowing is equally obvious. The civil sphere gets to work, so to speak, when “private” information escapes into the public arena. When this happens, he says, a process of “societalization” (Alexander, 2018) gets underway where particularistic practices are subjected to the universal moral and legal standards of the civil sphere. For Alexander, the main protagonists in the dynamic are journalists (Alexander, 2015, 2018). Yet, as I discussed earlier, whistleblowers are, in my view, part of the same democratic ethos of disclosure and visibility and, therefore, consistent with his overall theoretical framework. Whistleblowers generally act on a pro-social motivation (Dozier and Miceli, 1985; Weiskopf and Wilmott, 2013): they initiate processes of societalization because they have experienced organizational practices that they believe constitute a violation of the law or of socially anchored norms about fairness and justice.
What sets whistleblowers apart from other actors in the democratic ethos of disclosure is their privileged insider and expert position. In contrast, journalists, NGOs, lawmakers, and authorities inevitably approach wrongdoing in organizations from the outside (Olesen, 2022). In the opening remarks of the recent EU Directive (EU, 2019) on whistleblower protection, employees are thus considered to “play a key role in . . . safeguarding the welfare of society” because they “are often the first to know about threats or harm to the public interest . . ..” Being the first to know gives employees a relatively unique position among the various actors who can effect change in democratic societies. The whistleblower’s privileged position has at least two dimensions. The first has to do with proximity and the fact that employees observe wrongdoing from the front row, so to speak. The second concerns expertise. Employees are professional experts, which allows them to detect wrongdoing in a more accurate, detailed, and credible manner than observers who do not possess the same kind of expert knowledge.
In the discussion of the Frances Haugen case below, I consider how these historical-democratic notions of insider expertise and being “the first to know” need to be rethought in the light of Big Tech whistleblowing. In particular, I argue that the digital, intangible character of algorithms creates a range of new accessibility and detectability challenges.
Suspicious organizations
Parker (2016) has provocatively argued that all organizations, even the most legitimate, visible, and open ones, share important traits with secret organizations. Because every organization is defined by its boundaries, it is to some extent “unobservable” and, therefore, at least theoretically, prone to secrecy: “Secretus,” Parker (2016: 110) argues, “begins with separation, with a setting apart that marks different social spaces. Such boundaries are constitutive of organizing, and of organizations.” “In this way,” he goes on, “it becomes possible to assert that the boundary between ‘inside’ and ‘outside’ is an epistemological one too, a filter which prevents certain information from becoming visible.” There are several ways to connect Parker’s argument with the discussion in the previous section. Most notably perhaps, organizations are surrounded by a kind of permanent suspicion: wherever there are boundaries, there is also uncertainty about what happens on the other, partly unseen, side of that boundary. As a result, says Parker (2016: 102), popular culture is full of plots where wrongdoing takes place in a shady organization, and where the hero of the story works hard to disclose it so that justice can be restored. Even if Parker does not explicitly speak about whistleblowers, it is evident that they fit into his overall picture. As shown by Olesen (2021), for example, whistleblower movies are a busy cinematic genre in which whistleblowers are mostly portrayed as heroic Davids battling it out with malign organizational Goliaths who will do everything to hide their dirty secrets. In our democratic cultures, we are deeply fascinated by these organization insiders who risk everything at a personal level to speak up (Melley, 2020), just as we realize that if they had not, we would probably never have known the “truth” and would have kept on living in a less just world.
Organizations are key elements in the history of disclosure and suspicion. This has to do not only with their boundaries, as Parker (2016) suggests, but also with the fact that organizations function according to logics that, at least in some respects, are different from those that guide action on their outside, in “society.” Luhmann (1995) is perhaps the preeminent theorist of what these different logics look like and what their implications might be. To Luhmann (1995), society consists of various systems that function according to their own logics. In the economic system, the dominant logic revolves around profit; in the educational system, around learning; in the judicial system, around legality, and so on. Organizations are important units in Luhmann’s theory (Seidl, 2005). It is here that the system’s logics are practiced and executed. In that sense, organizations are “decision machines” (Nassehi, 2005), where decisions are made according to their ability to fulfill the organization’s (and in a wider sense, the system’s) goals and purposes (Andersen, 2006).
The enduring legacy of Luhmann for organization studies is not so much this view of differentiation, which is standard fare in organizational sociology, but the way it offers a theoretically grounded answer to why wrongdoing in organizations may develop. While Luhmann’s notion of “structural couplings” (1995) indicates that systems and organizations are never fully closed or isolated entities, the fundamental point remains that they understand the world and make their decisions on the basis of their own codes and logics. On the one hand, the very idea of an organization presumes some degree of autonomy, that is, an ability to decide on and about itself according to its own codes (Luhmann, 1995). On the other hand, autonomy also implies what Seidl (2005: 45), drawing on Luhmann, calls unavailability: “As the organization can only operate on its ‘inside’ and cannot distance itself from itself . . . it is captive of its own processes and thus does not have (complete) control over itself.”
This self-closure, as it were, is obviously not an automatic producer of wrongdoing, but it does indicate that every organization has an in-built potential to lose control over itself, as Seidl suggests. This happens when dominant codes are followed to the extreme – that is, when the wrongdoing, as Pohlmann (2020: 15) notes, applying Luhmann’s (1964) concept of “useful illegality” (“brauchbare Illegalität”), “is conceivable as a useful action in light of the organization’s purpose.” 2 For example, if we consider the tax evasion practices disclosed by whistleblowers in what is now known as the Panama and Paradise Papers, these practices were “useful” because they helped companies increase their profits. Similarly, the mass surveillance carried out by the NSA (and disclosed by Edward Snowden) made good sense from within that organization’s own logics. An organization such as the NSA revolves around the code security-insecurity, and seen through that lens, the more surveillance the better.
That organizations are inherently suspicious in the sense discussed above is well-documented in the literature on normalized wrongdoing (Ashforth and Anand, 2003). Study after study within this tradition has found that organizational wrongdoing is not an unfortunate aberration, but a “predictable and recurring product of all socially organized systems” (Vaughan, 1996: 274; see also Palmer, 2013; Pohlmann et al., 2020). What researchers consistently identify is how wrongdoing often achieves a level of normalization where it has become broadly accepted and internalized among managers and employees (Ashforth and Anand, 2003). Rather than an organization of corrupt individuals (where wrongdoing is typically more isolated and localized), the notion of normalization implies that the organization is itself corrupt (Pinto et al., 2008). The exact motives that drive processes of wrongdoing obviously vary from one organization to the next (Leys and Vandekerckhove, 2014). Yet, it is a consistent finding that the aggressive pursuit of profit is one of its key causes (Balch and Armstrong, 2010; Kvalnes and Nordal, 2019; Rhodes, 2016).
In the discussion of the Frances Haugen case, I continue this point by considering how social media platforms increasingly shape our public spheres. In particular, I argue that the profit-oriented design of Facebook’s algorithms creates a range of democratically consequential tensions and dysfunctions in societies’ communicative infrastructures.
Frances Haugen and the democratic implications of Facebook’s algorithms
I begin the analysis of Frances Haugen and Facebook by reflecting on autobiographical data and method. I then provide a brief summary of Haugen’s experience at Facebook as she recounts it in her 2023 autobiography. Finally, I extract two key themes from the autobiography for further discussion: (a) profitable engagement and (b) digital opacity.
Autobiographical data and method
The analysis of Frances Haugen and the Facebook Files is based on a deep reading of her more than 300-page autobiography, The Power of One: Blowing the Whistle on Facebook (Haugen, 2023). Autobiographical data is a unique data source in organization research because of its detail and richness (Mathias and Smith, 2016). Relying in this way on Haugen’s self-account obviously makes the analysis subjective. This is intentional. As I noted earlier, whistleblowers are insider experts. In fact, this designation is what sets them apart from other actors within the ethos of disclosure and marks their unique contribution to democracy. If we accept this role and its importance in our democracies, it follows, in my view, that we must take their subjective experiences seriously as a starting point for analysis.
However, this does not mean we should take such accounts at face value, just as we should always remain open to the possibility that they are driven by ulterior motives and contain manipulative, self-aggrandizing elements (Mathias and Smith, 2016). Whistleblowers themselves are well aware that they may be (and very often are) accused in exactly this way. To guard against this type of critique, many choose to work with journalists. This kind of collaboration provides their stories with legitimacy because they are filtered through journalistic standards of verifiability, accuracy, and factuality (Olesen, 2023). Haugen blew the whistle on Facebook in collaboration with the Wall Street Journal (see below for details), which published a series of articles based on her experience and documentation in the fall of 2021. In addition, she has worked closely with several professional bodies such as parliamentary committees to provide testimony and evidence. Engagements such as these necessarily involve thorough vetting of the credibility of the whistleblower and her evidence.
In their review and defense of autobiographical research in organization studies, Mathias and Smith (2016) suggest using autobiographies in a triangulated set-up where autobiographies are held up against other sources, including critical commentaries on their authors. The fact-checking and credibility vetting at various professional layers to which Haugen’s story has been subjected largely accommodates, in my view, Mathias and Smith’s concerns about triangulation. Obviously, this does not imply that we should see her account as the “truth.” For example, shortly after her disclosures, Facebook CEO Zuckerberg (2021) offered a response that disputed many of Haugen’s claims. However, I do consider her book to be sufficiently trustworthy and evidence-based to merit being taken seriously as a unique and valuable data source for understanding the inner workings of Facebook’s algorithms. There is also an ethical stance at play here, as I indicated above. If we prize the efforts of whistleblowers to alert us about organizational wrongdoing based on their insider knowledge and expertise, we also have a duty to try to see things with them, as they experienced them from their particular position in the world (Kenny, 2019) – at least as a starting point and until we receive new information that tells us not to. The autobiography as a genre perhaps offers the most powerful and detailed way we have as researchers to gain access to this domain.
In the following, I extract two main themes from Haugen’s book: (a) profitable engagement and (b) digital opacity. These are obviously not the only themes to be found in Haugen’s detailed account, and I therefore do not pretend to provide a balanced presentation of the book. The two themes were identified deductively, guided by a set of theoretically informed questions: What kind of wrongdoing took place in Facebook according to Haugen (65); what was the primary driver behind it (32); what were its consequences (28); and how did Haugen see her role as a whistleblower in relation to these practices and conditions (19)? This led to the identification of a total of 112 text excerpts where these issues were discussed in some detail. The parentheses above indicate the number of in-text occurrences within each question frame (the total adds up to more than 112 because some excerpts refer to two or more questions). Obviously, this text corpus contains much more complexity and nuance than I can report here. However, I do find that the two themes that follow condense a significant part of its essence. To set the scene for the presentation of the two themes, I first introduce a brief summary of Haugen’s account and background.
Background and summary
Frances Haugen joined Facebook on 10 June 2019 and left the company in May 2021. Before joining Facebook, she had worked at Google, Yelp, and Pinterest in programming and software engineering. She holds degrees in computer engineering and business administration from Olin College and Harvard University. When approached by Facebook recruiters in 2019, she specifically requested to be given management roles related to the company’s work on misinformation and integrity (Haugen, 2023: 157). However, she quickly came to the realization that the company, in her view, was not paying sufficient attention to the negative effects of its algorithms and that its efforts were lacking in staff, resources, and management prioritization. The main problem resulted from changes made to Facebook’s algorithms in 2017–2018. These were implemented “in response to a slow but troubling decrease in the amount of content being produced on the platform” (Haugen, 2023: 11). Facebook concluded “that the only intervention that increased the amount of content produced was giving creators more small social rewards. In other words, the more people who like, comment on, and reshare your content, the more likely you are to produce more content for Facebook” (Haugen, 2023: 11).
As she increasingly felt that positive change from within the company was not going to happen, she began to contemplate the possibility of going public with her concerns: “No problem is solved within the frame of reference that created it. If only ineffective solutions existed inside the company, maybe we needed the public to come save Facebook” (Haugen, 2023: 216). To achieve this, she began working closely with Jeff Horwitz from the Wall Street Journal, who had contacted her and several other employees at Facebook via LinkedIn (Haugen, 2023: 236). Haugen was the only one who decided to start a conversation and, eventually, a collaboration with Horwitz. Horwitz had been working on Facebook and algorithmic power for several years and contacted Haugen and her colleagues because he recognized that their expert knowledge would be needed to move his story forward with evidence and credibility. In order to avoid detection by the company, Haugen began taking screen images of documents that provided insight into Facebook’s strategies and what the company knew about their negative effects. In the end, she “had captured twenty thousand pages of documents for the public so the case could not be refuted” (Haugen, 2023: 282).
The collaboration with Horwitz was instrumental to Haugen’s whistleblowing: “I would not have thought to document those topics if Jeff hadn’t helped me appreciate what the world wanted to know or, really, deserved to know about them” (Haugen, 2023: 282). The Wall Street Journal’s work on Haugen’s material was presented in several articles, The Facebook Files (Wall Street Journal, 2021), beginning in September 2021. In 2023, Horwitz (2023) published a book of his own on Facebook’s algorithms. Parallel to her work with Horwitz and the Wall Street Journal, Haugen also worked with her lawyers to present material to the Securities and Exchange Commission (SEC) and the Senate’s Consumer Protection Committee. She became publicly known as the Facebook whistleblower when she gave an interview to CBS’ 60 Minutes on October 3, 2021. She currently works as an advocate for Big Tech transparency and ethics and has her own website (franceshaugen.com).
Profitable engagement
In the background section, I described how, in Haugen’s assessment, Facebook had changed its algorithms in 2017–2018 to create more user engagement through anger and negativity. In the world of social media, engagement is profit. Since Facebook is free to download and use, its income comes primarily from advertisers who pay according to the number of users that can be exposed to commercials: “Right now advertising-funded platforms are incentivized to make their products as ‘sticky’ as possible – every minute more they keep you online is another minute when you might view an ad or click on it and make them money” (Haugen, 2023: 325). This creates incentives to expand the number of users and engagement. Every company, says Haugen (2023: 221), “that reports how many users it has will be worth more if they have more ‘users’ of all sorts – bots included – just as a brick-and-mortar store would be if they claimed they had more dollars than they actually have.” As a result, she expands, “Facebook locked itself into spirals that were hard to exit once they’d begun . . . Stock analysts were endlessly hounding them for every penny of profit, but none asked about the costs external to Facebook that accompanied the ever-greater returns” (Haugen, 2023: 267).
The company was aware of the negative effects of its algorithmic set-up. According to Haugen (2023: 292), “Facebook’s own internal documents showed that Facebook and Instagram algorithms consistently promoted and pushed more and more extreme content over time as they blindly chased clicks and comments to drive up their content-agnostic goal metrics.” According to Haugen, one effect of this change was a surge in aggressive content as these forms of communication elicit more user reaction: “You could say something outrageous and have it go viral, but if the same person posted a fact-based rebuttal explaining why that post was nonsense, that correction would almost never be seen by as many people” (p. 150). This dynamic, she claims, was a driver behind the January 6, 2021 Capitol riots following the 2020 Presidential election: “Part of that echo chamber was fueled by people piling onto Stop the Steal posts to emote their anger. Every comment forced it again to the top of people’s feeds, retriggering patriots’ urges to protect the country” (Haugen, 2023: 253).
Facebook did have an ethical component, Civic Integrity, which dealt with issues such as fake news, hate speech, and other potentially negative outcomes on its platforms. Before joining Facebook, Haugen had been hesitant and skeptical about the company: “The only reason I joined the company when I did was to be part of Civic Integrity. I thought back on some of the Facebook hires I was most impressed by and wondered if any of them would have come to Facebook if Civic Integrity hadn’t existed” (Haugen, 2023: 233). However, in her experience, the work of Civic Integrity was consistently deprioritized and underfunded (eventually, it was dissolved and reorganized) in ways that made it clear that integrity concerns stood in the way of the company’s goal and profit metrics (Haugen, 2023: 184). This also manifested itself in the company’s unwillingness to direct sufficient resources to fact-checking in countries outside of the United States. According to Haugen, access to the Internet in many countries in the Global South is synonymous with Facebook. For example, she argues that in Myanmar, the regime’s violent campaigns against the Rohingya minority to a large extent took place on Facebook, where extreme content and misinformation were able to circulate without any significant curbs and interventions from Facebook itself (Haugen, 2023: 165).
During her time at the company, Haugen found herself and her work caught between Facebook’s pursuit of profit and her own hopes that the company could still be saved from itself, as it were. This hope also delayed her decision to blow the whistle: “If you stayed, you knew there was at least one more person trying to keep the train from jumping the tracks. You might have your doubts about whether what you contributed to preventing disaster would be enough, but at least you were trying” (Haugen, 2023: 184). However, in the end, she concluded that the company would not change unless it was placed under the public spotlight.
Digital opacity
Haugen describes how the workings of algorithms are surrounded by a kind of digital opacity. “It seemed unlikely,” she says, “that many on the outside could understand how Facebook’s unique culture birthed their unique closed-system software” (Haugen, 2023: 9). In this view, the functioning of codes and algorithms is hard to access and understand, even for lawmakers, journalists, and activists who work on the subject on a daily basis. As a result, Haugen (2023: 9) points out, “[t]he only path to deeply understanding these systems is by working at one of a handful of large tech companies in specialist roles.” She even states that at the time when she came forward, “there were maybe three hundred or four hundred people in the entire world who understood deeply enough how these systems worked.”
In the autobiography, Haugen makes several instructive comparisons with Nader’s (1965) critique of safety negligence in the United States car industry in the 1960s.
Without belittling Nader’s contribution, Haugen is adamant that cars are a very different kind of product than the codes and algorithms that drive platforms such as Facebook: “Unlike the data centers that host the code that produces our social media experiences, anyone with the money could buy a car, drive it, and if they wanted to, crash it. They could take the car apart.” Moreover, she claims, if they were unsure about how to interpret what they found when they did so, they could lean on institutionalized knowledge “in schools that had existed for decades researching the best ways to build cars and trucks” (Haugen, 2023: 320). The same point extends even to current, software-based products such as computers and smart phones: “People can and do take apart Apple products within hours of their release . . . Apple knows that if they lie to the public, they will be caught, and quickly” (Haugen, 2023: 4).
Codes and algorithms are also hard to “see” and scrutinize because they create user experiences that are never the same from one person to the next. Facebook provides “a social network that presented a different product to every user in the world.” In Haugen’s view, people are therefore severely limited by their “own individual experiences in trying to assess What is Facebook, exactly?” (Haugen, 2023: 4). It may well be evident that the algorithm produces a range of outcomes, but when these outcomes are different on every individual media screen, criticism is easily refuted by Facebook as anecdotal and subjective (Haugen, 2023: 4).
As Haugen notes above, it requires sophisticated expert knowledge, not only to be able to “see” the problems in the first place, but also to pass information on to others (the press, for example) in ways that appear knowledgeable and, therefore, worthy of public attention. Expert knowledge obviously also exists outside Big Tech companies, but the black box can only truly be understood from within and by those who make the machinery run. What was needed, then, says Haugen (2023: 10), “was someone who came from within the company and who was privy to the culture, internal machinations, and interacting demands that the different departments imposed on each other.” Only an expert insider “could provide the context and connective tissue to understand why so many smart, kind, conscientious people could render a product with such horrific and world-rocking consequences” (Haugen, 2023). For Haugen, part of the solution to the problems of digital opacity is to make social media and their algorithms more transparent to the public. The reason Facebook got out of control, she says, “was that it was never subject to oversight, there was no mandated transparency, and therefore the public didn’t know what it didn’t know” (Haugen, 2023: 312).
Discussion
I can now return to the argument I started with; that Big Tech whistleblowing is a distinct phenomenon in need of its own theorizing. To pursue this idea, I begin by refracting it through the historical-sociological framework I laid out earlier. I use the Frances Haugen case as evidence and illustration, but also draw on key academic works on algorithms and digitalization to refine and expand her points. I then turn to a somewhat skeptical reading, which identifies how the role of Big Tech whistleblowing is also curbed by a range of challenges inherent in the Big Tech industry. Here, I incorporate the cutting-edge work recently published by US law scholars Wu (2024) and Bloch-Wehba (2024).
My aim here is not to single out Big Tech companies as somehow “worse” than everyone else. What I want to argue instead is that they are democratically problematic in a particular way that requires reflection for both academic and democratic reasons. Big Tech is of course an impossibly broad term. It is nonetheless useful because the “big” element marks out the power issues at stake. In this way, it is part of a family of other “bigs”: Big Oil, Big Tobacco, Big Pharma, etc. Each of these categories defines a group of companies with dominant positions in their respective industries. The term big, when it is used in these contexts, is rarely meant in a positive way. Rather, it says that these companies are so powerful that they can (and do) generate a range of negative outcomes for society. It may seem unfair to lump Big Tech in with the frayed reputations of these companies. Saying “Big Oil” immediately conjures up images of oil spills, Big Tobacco is forever tied to graphic imagery of cancerous lungs, and Big Pharma to lives destroyed by the Opioid Crisis in the US. What do we see when we say Big Tech? Perhaps an iPhone, the Facebook logo, or Google’s start page? Where Big Oil, Big Tobacco, and Big Pharma affect the natural environment or the human body in very direct ways, Big Tech is distinct in two ways that are no less consequential and in fact may run even deeper, socially and democratically: its operations (a) are opaque and intangible due to the digital nature of its technologies and (b) have deep, profit-driven social and democratic effects on communicative infrastructure.
Let me point out, before I continue, that when I highlight opacity and democratic effects here, I am not trying to say that other “bigs” are not opaque and do not have social and democratic effects. As I discussed earlier, via Parker (2016), opacity is in many ways a characteristic of all organizations. Surely, it would also be correct to say that other “bigs” have social and democratic effects. If we think again about opioids, these drugs first and foremost affect the human body, but when we also speak about them in terms of “crises” and “epidemics” we clearly acknowledge that they have wide-ranging social impacts too. What I am after, then, is not to make Big Tech exceptional. Rather, what I try to argue is that Big Tech companies are opaque and socially and democratically consequential in new, rather specific ways that in my view are distinct enough to merit discussion in their own right.
The only ones to know
In the discussion of democratic disclosure, I argued that the whistleblower’s historical role in democracies revolves around their insider status as organization employees and professional experts, which often makes them “the first to know” about organizational wrongdoing. This point still stands. However, there are reasons to reconsider what it means to be the first to know in the context of Big Tech. As Haugen (2023) lays it out above, detecting and understanding how specific algorithms work requires sophisticated expert knowledge. While such expertise obviously also exists outside of the organization, the digital, intangible nature of algorithms makes it difficult to observe them from the outside. In a formulation that echoes Haugen’s (2023) experience, Kitchin (2017: 20) remarks how algorithms “are not open to scrutiny and their source code is hidden inside impenetrable executable files. Coding often happens in private settings, such as within companies or state agencies, and it can be difficult to negotiate access to coding teams . . .” (see also Bucher, 2018; Gillespie, 2017; Zuboff, 2022). Of course, their effects can in a sense be viewed on every screen, but this is still a primarily indirect way of seeing the algorithm; it is also, as Haugen (2023) notes, a fragmented observation where the algorithm manifests itself in an individualized manner from user to user. This unobservability makes it relatively easy for the creators and owners of the algorithm to control critical debates and claims as long as they come from the outside.
There is an extra opacity dimension at play here. While algorithms are formalized rules of selection and prioritization, they feed on the information, the endless flurry of digital footprints, that we, as users, constantly leave behind when we move around in online spaces (Alaimo and Kallinikos, 2019, 2021). Not only do we have limited access to understanding how Big Tech companies formulate their algorithmic rules; we are also pretty much in the dark as to what they know about us and, not least, how they use that knowledge to refine their products and practices. All in all, this creates a situation of considerable information asymmetry (Bloch-Wehba, 2024: 1510) between those who create our communicative infrastructures and the citizens who use and depend on them. This has important implications for the way we think about the democratic role of whistleblowers. To paraphrase Haugen (2023: 312), if it is true that increasingly the public doesn’t know what it doesn’t know, it seems that our dependence on whistleblowing employees to be the first to know grows (Kenny, 2023: 65). In fact, we might even say that employees are no longer simply the first to know, but also, to some extent and in some industries at least, the only ones to know. 3
Of course, this is a somewhat exaggerated way of putting it. Lawmakers, journalists, and intellectuals such as Doctorow (2021), Vaidhyanathan (2018), and Zuboff (2022) have long been criticizing social media platforms for many of the same reasons as Frances Haugen. Nonetheless, there is something about the nature of algorithms that tends to keep the outside observer at some distance. While Haugen’s comparison between cars and algorithms that I discussed earlier is obviously, to some degree, reductive of variation and complexity, it still illustrates an important difference: that digital technologies and products have a particularly intangible, non-physical character that makes it hard to “see” them and, conversely, easy for organizations to hide and shroud them in opacity. As I have said elsewhere, this is not a statement about the absence of opacity in other industries, but a note on a specific form of opacity that we primarily encounter in the Big Tech industry.
Shaping communicative infrastructure
Alaimo and Kallinikos (2019: 302) note that, “[t]he opacity and biases of measures and scores that social media produce would make them entertaining curiosities were they not seriously involved in our lives.” What is at stake for Alaimo and Kallinikos (2019), then, is no less than an “infrastructuring of sociality,” where algorithmic recommender systems create a world of stereotypical identities that are constantly reproduced by their own choices (Alaimo and Kallinikos, 2019: 298; Amoore, 2020: 4–5; Kitchin, 2017: 18). They also shape and transform the very way in which democratic participation takes place. The Storm on the Capitol on January 6, 2021, which Haugen (2023) also discusses, is illustrative. It would obviously be wrong to accuse Facebook of wilfully producing this outcome. Yet, there is mounting evidence that this ignominious event was at least partly attributable to the logic of the algorithmic recommender systems at work in Facebook. Facebook’s algorithms, say DeCook and Forestal (2023: 638), thus nurture an “undemocratic cognition” because they work to “systematically amplify the most inflammatory content on its platform, while also siphoning people into Facebook Groups that reflected and refined this kind of content.”
As I argued with Luhmann’s (1964, 1995) thinking on autopoiesis and “useful illegality,” all organizations are, in principle, suspicious because their actions are shaped by logics that are inward looking and therefore, at least potentially, blind to the way they might violate society’s norms and laws. In Haugen’s (2023) account it is evident, for example, how the profit logic underlies the democratically consequential decisions that Facebook makes regarding their algorithms. The godfather of public sphere theory, Habermas (2022), has recently worried about the effects of social media on the quality of public deliberation and democratic debate in a way that echoes Haugen’s (2023) views. Speaking about platforms such as Facebook, YouTube, and Instagram, he describes these as “companies that obey the imperatives governing the valorisation of capital . . .” (Habermas, 2022: 163).
Habermas’ concern with the way economic incentives negatively affect the public sphere is not a new one; it also informs his original work on the public sphere from 1962 (Habermas, 1962/1989). Here, he identified a growing commercialization of newspapers that in his view subordinated publicist orientations to the logics of profit. Yet, even under these conditions, legacy media such as newspapers and television were also governed by journalistic logics such as factuality, balance, and public service (Deuze, 2005). The challenge that comes with social media and algorithms is that despite their wide-ranging impact on the “infrastructuring of sociality” and, we might add, of democracy (Alaimo and Kallinikos, 2019), they do not work under “the duties of journalistic care” (Habermas, 2022: 167) but from a commercial logic, which is, furthermore, partly hidden from us because it is effectuated, as I discussed in the preceding section, through algorithms that we cannot “see.”
These observations mark another key distinction between Big Tech and other industries. While other “bigs” come with a range of negative effects for society, for humans, and for the natural environment, Big Tech is distinctive in the kinds of effects that it has on democracy. As should be clear by now, I particularly have in mind the way major social media platforms now shape and organize our communicative infrastructure. It may seem that this is a rather soft or indirect type of impact, but as the example of the Storm on the Capitol vividly testifies, communication is not just talk and information: the way it is organized has real-life, tangible effects. And this is just in the short term. As researchers have worried for some time now (Bennett and Livingston, 2018; Habermas, 2022), the way social media algorithms favor negative, extreme, and fake communication has the potential for a gradual erosion of social trust, civility, and coherence in our democracies. The problem here, of course, is not that communicative infrastructure is organized per se: it always has been, and as I suggested above, primarily by the so-called legacy media. The challenge rather comes from the fact that with the dominance of Big Tech and their algorithmically driven media platforms, this dynamic is becoming increasingly tied to a commercial logic.
The paradox of visibility
What I have said so far should make it clear why Big Tech whistleblowing matters. In fact, much of what we know about digitalization, big data, and algorithms, we know from “industry insiders” such as Edward Snowden, Christopher Wylie, Zach Vorhies, Peiter Zatko, Mark Klein, and Frances Haugen (Bloch-Wehba, 2024: 1527). Their effect on policy has been significant. Legislation such as the European Union’s ambitious GDPR framework, for example, was inspired to no small extent by Edward Snowden’s exposure of digital mass surveillance (Coyne, 2019). And more recently, Frances Haugen’s disclosures have motivated a lawsuit where 33 US states are suing Facebook/Meta for the negative effects their algorithms are having on the mental health of children and teenagers in particular (Kang and Singer, 2023). The scale of this lawsuit is comparable to the lawsuit brought by several US states against Big Tobacco in the 1990s (Coraiola and Derry, 2020). While this makes for a relatively positive reading, where Big Tech whistleblowing has made real, democratic impacts, there are also grounds for some skepticism and a deeper reflection on what I will refer to here as the paradox of visibility. Let me single out two issues that in different ways connect back to the theoretical framework and to Haugen’s (2023) disclosures.
First, Wu (2024: 26) has noted how the protection of Big Tech whistleblowers is surrounded by several uncertainties. One concerns the fact that the kinds of wrongdoings disclosed by Big Tech whistleblowers may not always constitute illegality in a narrow sense. This becomes a challenge because significant parts of existing protective whistleblower legislation primarily offer protection when disclosures relate to breaches of law and regulations. This lack of clarity reflects the fact that the operations of Big Tech companies exist in a legal gray area, where lawmakers are still trying to catch up with an industry that has long been allowed to operate in a relatively permissive legal environment (Bloch-Wehba, 2024: 1515). Without clear legal definitions of what constitutes wrongdoing within the Big Tech industry, the protection of whistleblowers necessarily remains patchy. Regulation is also made difficult because the negative effects of Big Tech technologies are a complex matter. In his discussion of AI technologies, Wu (2024: 26) considers, for example, how current whistleblower protection is typically bound up with dangers that are substantial and immediate. AI and algorithmic technologies do not necessarily possess these qualities. Rather, as I have discussed throughout the article, their consequences are social, incremental, and often oriented to the future. These complex cause-and-effect relations make it challenging for would-be whistleblowers to build their arguments and be heard. It also makes it easier for the accused organization to refute their claims as overreaction and speculation.
Second, Big Tech whistleblowing is limited by the opaque and secretive nature of Big Tech companies. As I have outlined, Big Tech opacity comes from the sophisticated, hard-to-see and hard-to-understand nature of algorithmic technologies. There is, however, also a more willed concealment in operation. Every industry and every organization legitimately works with some kind of secrecy and discretion in order to protect its position and competitive edge. Yet, according to Bloch-Wehba (2024: 1517), “tech companies have embraced a culture of secrecy and have aggressively exploited trade secrecy and corporate confidentiality beyond ordinary expectations.” This can be seen in their employee policies, which are organized around NDAs (non-disclosure agreements) and surveillance (Bloch-Wehba, 2024: 1520). These conditions severely restrict the room for maneuver for would-be whistleblowers. While the secrecy and opacity may sometimes be pierced by whistleblowers or regulators, the intangible, easy-to-hide nature of algorithms makes it difficult to follow up and certify that demands for regulation have actually been followed. As Bloch-Wehba (2024: 1515) skeptically notes, the complexity of these technologies “is likely to make disclosures meaningless because algorithms might be adjusted and incorporate new data over time.”
It may also be an illusion to think of expert employees as having some kind of full access to the algorithmic machine room. While Haugen’s (2023) account in some ways seems to subscribe to such a view, there are reasons to be cautious. A report by the Council of Europe (2017: 39) notes, for example, how the persons who are “implementing the algorithmic tools for applications may . . . not fully understand how the algorithmic tools operate.” 4 Given the long and growing list of Big Tech whistleblowers, companies are increasingly on guard against disclosure and intrusion. Apart from stepping up the surveillance, vetting, and NDA policies that I have already touched upon, this may involve organizing and delegating work in a way where fewer and fewer people have a sense of the full picture, thus circumscribing the opportunities for coherent dissent and whistleblowing.
In summary, these observations should prompt a balanced understanding of the potential of Big Tech whistleblowing. While the disclosures by Frances Haugen and others have had very significant impacts, the issue of Big Tech whistleblowing also exposes a certain paradox of visibility. As Zyglidopoulos and Fleming (2011) discussed already before the power of Big Tech had become fully apparent, the drive toward organizational visibility and transparency that characterizes late-modern societies can only ever be partial. This is so, they say, “because many of the crucial discussions and debates on the social and/or environmental consequences of corporate activities cannot adequately take place in the lay domain given the technical language in which they are embedded” (Zyglidopoulos and Fleming, 2011: 698). As Frances Haugen’s account vividly testifies, these concerns have if anything been exacerbated by the inaccessibility and technological opacity that surrounds the Big Tech industry today.
The notions of visibility and transparency that Zyglidopoulos and Fleming (2011) bring into the debate are important because the very idea of whistleblowing is in many ways predicated on a kind of liberal optimism where democratic progress is achieved through openness and disclosure (Kenny, 2023). While the new, important knowledge given to us by whistleblowers such as Frances Haugen in many ways confirms the power and utility of such a vision, Zyglidopoulos and Fleming (2011) also remind us of its inherent limitations. This is not to suggest, to put it a bit bluntly, that disclosure is futile. On the contrary, as I see it, identifying the limits of the power of whistleblowing and dissent against the negative aspects of Big Tech is a necessary starting point for generating real democratic change in this area.
Conclusion
The article has proposed that whistleblowing in the Big Tech era calls for new theoretical reflection in whistleblowing research. Theoretically, I have tried to ground this discussion in a historical-sociological framework, which highlights the relationship between whistleblowing and democracy. To give the discussion empirical direction, I analyzed Frances Haugen’s recent disclosure of Facebook’s algorithms. Two things emerged from this reflection that are key to understanding the nature of Big Tech whistleblowing. First, we are strongly dependent on Big Tech whistleblowers because the wrongdoings they expose are shrouded in opacity and secrecy to an extent where it becomes increasingly difficult for outside observers to make the same kind of impact. Second, the democratic stakes of whistleblowing within this area are high. Because Big Tech companies do not just offer products in a narrow sense but also order sociality, they need to be approached with a particular sense of suspicion. This idea is amplified by the fact that Big Tech’s productive activities are based on a profit logic that tends to subsume social and democratic concerns. While these points in many ways call for more whistleblowing in order to democratize the Big Tech sector, I also tried to identify some of the limits to this potential. The uncertain status of whistleblower protection and a lack of clarity on what exactly constitutes wrongdoing within the Big Tech industry limit the room for maneuver for would-be whistleblowers. Furthermore, the opaque nature of algorithmic production provides companies with significant advantages in maintaining a level of secrecy around their operations.
This amounts to what I called a paradox of visibility: On the one hand, the growing list of whistleblowers exposing the negative consequences of the Big Tech industry seems to suggest a new level of visibility within the sector; on the other hand, these drives are circumscribed, perhaps even unrealizable, because of the special characteristics of the sector.
Footnotes
Funding
The author received no financial support for the research, authorship, and/or publication of this article.
Research involving Human Participants and/or Animals
Not applicable.
Informed consent
Not applicable.
