Abstract

The mythos of the open, interconnected, and uncontrolled Internet was once a powerful narrative. Initially regarded with great optimism, this developing medium was envisioned as democratizing social participation, engagement, and communication. Capturing this optimism, in 1999, journalist and legal scholar Andrew Shapiro proclaimed: “Hierarchies are coming undone. Gatekeepers are being bypassed. Power is devolving down to ‘end users’… No one is in control – except you” (pp. 11, 30). However, decades later, in the wake of the Cambridge Analytica scandal, the 2016 US presidential election, and other well-publicized cases of social media misconduct, it has become increasingly clear that the Internet is not an unmediated and unregulated space that uniformly empowers users. Instead, a variety of covert corporate and institutional constraints shape what appears online and what is concealed from public view. Tarleton Gillespie’s (2018) book Custodians of the Internet: Platforms, content moderation, and the hidden decisions that shape social media examines these forces, arguing against cyber-utopianism and masterfully demonstrating the internal logics, procedures, and motivations underlying social media content moderation.
Situated at the crossroads of Internet studies and political economy of communication, Gillespie’s book provides a much-needed corrective to the idea that platforms are unfiltered, participatory, and bottom-up sites of civic engagement. His lucid thesis is woven throughout: “Moderation is not an ancillary aspect of what platforms do. It is essential, constitutional, definitional. Not only can platforms not survive without moderation, they are not platforms without it” (Gillespie, 2018, p. 21). To demonstrate the critically important yet strategically concealed role of moderation, Gillespie employs a platform studies approach (Plantin, Lagoze, Edwards, & Sandvig, 2018) and draws from an eclectic mix of user guidelines, corporate statements, news articles, and interviews. Gillespie’s chapters are strategically organized to support his central argument. He first provides a crash course on content moderation, describing community guidelines (across a variety of social media sites), the mechanics and ecology of community flagging, and the role of global moderators who receive user complaints and enforce policies. After furnishing a behind-the-scenes account of moderation, Gillespie engages in a theoretical discussion, tracing the impacts of commercial speech regulation on both users and society at large. His work thus demonstrates the centrality of moderation to platforms and the stakes of entrusting our content to digitalized and corporatized “custodians.”
An extended focus on visibility constitutes the main strength of this book. Gillespie asserts that platforms purposefully mask the processes of content moderation to downplay their curatorial function, to maintain their ideological positioning as venues of open expression, and to avoid being held accountable (and even liable) for the millions of subjective (and at times discriminatory) decisions adjudicated each day. This system’s lack of transparency perpetuates the myth that platforms are “open, impartial, and unregulated” (p. 21), both naturalizing the role of social media sites and hiding their active and deliberate interventions. While the labor and procedures of content moderation remain hidden, the opaque directives of moderators impact users who find their participation (and the visible, public traces of such participation) circumscribed by the architectural and regulatory constraints of platforms. Exposing these processes, Custodians of the Internet is a deeply uncomfortable book to read, especially for those who have not previously examined the institutional logic and corporate motivations of platforms. It is as if after decades of driving, we are suddenly being asked to look under the hood and acknowledge that there are technical features outside our purview that influence how and where we can drive. This insight is both startling and critically important, encouraging us to think beyond our individual experiences and to consider the underlying structure of platforms.
A second major contribution of this work is the way in which it traces societal impacts of content moderation. Although platforms connect users across the globe, Gillespie argues that content moderation and selective filtering lead to a profoundly isolated experience online. Social media sites personalize content and moderation procedures (e.g., by nation, community, or even on an individual basis) in a secretive manner, leaving users unaware that what they see online may not be the same as what their friends experience. Gillespie provocatively suggests that “Facebook is really a multitude of Facebooks, appearing to be one public venue but in fact spun out in slightly different versions, theoretically in as many versions as there are users” (p. 195). Consequently, the selective nature of content moderation undermines the potential for platforms to serve as sites of mutual deliberation and democratic participation. Thus, by the end of his book, Gillespie effectively demonstrates the need to critically engage with content moderation and to study its operations, biases, and profound effects.
Although Custodians of the Internet is expertly researched and written, there are ways that Gillespie’s ideas could be expanded. First, his account unfolds from the perspective of platforms rather than users. Only one chapter explicitly draws upon interviews with users, and additional interviews may have shed light on the ways in which moderation exacerbates societal inequities and biases. A second critique relates to the framing of social media sites and the call to arms offered at the end of the work. Since Gillespie avoids taking a critical stance toward platforms, his lack of condemnation may read as an implicit condoning of corporate control and unfettered profit motives. Throughout the work, he portrays platforms as unwillingly bearing the mantle of custodianship, thrust into an “untenable situation” (p. 197) in which they must simultaneously appease users who believe they are moderating too little and users who believe they are moderating too much. While critical of the opacity of platforms, at the end of his book, Gillespie casts responsibility onto the public, suggesting that users must become “custodians of the custodians” (p. 212).
As Gillespie ultimately shifts the onus of responsibility onto users, his proposed strategy opens the door to a bevy of questions, including: Have users really been complicit in handing over the reins of public discourse to platforms? Must reform primarily come from individual users? And, if users take the call for collective governance and moderation seriously, what shape would this effort take, and how might it mitigate or reify existing inequalities? These questions may perhaps be best left for future research, but readers of this work are not provided tangible answers as to how they may enact Gillespie’s recommended process of reform. Thus, the book may be unsatisfying for readers looking for a critical presentation of platforms and concrete recommendations for holding these sites accountable.
Despite these critiques, Custodians of the Internet is timely, readable, and informative. The book would make a great addition to syllabi on Internet studies, cultural sociology, and the political economy of communication, and it extends growing scholarship on the labor practices (Roberts, 2016), regulatory underpinnings (Klonick, 2018), and democratic concerns (Vaidhyanathan, 2018) of content moderation. In sum, Custodians of the Internet opens up a range of research (and regulatory) questions about the power of platforms, the nature of discourse and publics online, and the extent to which we truly want corporate, unaccountable “custodians” shaping user visibility while themselves remaining cloaked in secrecy.
