Abstract
When faced with conflicts, social media platforms harken back to their front-facing, user-friendly documents. These documents, often called community standards or something similar, lay out the practices allowed on their sites. Legal scholarship has documented how technology companies incorporate a particular First Amendment jurisprudence into these community standards documents, and this work empirically examines that claim. Specifically, we were interested in how the backbone of American free expression—the marketplace of ideas metaphor—is incorporated into these governing documents. We conducted a textual analysis of five US-based social media platforms (Facebook, Twitter, YouTube, Instagram, and Tumblr) to analyze how the marketplace of ideas metaphor may be invoked. We found these documents do rely heavily on the metaphor for presenting governing strategies. They also lean on an oft-referenced but ambiguous moderation line and on the idea of a singular, global, borderless community, both of which bolster the marketplace metaphor. Given this, US-based social media platforms are holding the rest of the world to US-based ideas of free expression, thus engaging in digital manifest destiny.
Introduction
Within the last few years, social media platforms have been at the center of discussions about what can, or should, be allowed on their sites. Sites such as Facebook, Twitter, and YouTube have found themselves embroiled in conflicts that have forced them to justify their policies and actions (or lack thereof). From the Cambridge Analytica data scandal to the multi-platform suspension of right-wing conspiracy theorist Alex Jones, social media are increasingly exhorted to be arenas and arbitrators of the digital public sphere.
Questions about policies and moderation on these platforms refer back to the same source: documents written by these sites to lay forth their policies and user expectations. On Facebook, this document is referred to as the Community Standards. On Twitter, it is the Twitter Rules. Regardless of title, these documents decree principles, outline rules, and set standards for content and practices on their respective platforms. These documents are of particular note because platforms are not typically the creators of content on their sites but the arenas for content. In addition, they referee disputes about said content. In this way, “platforms don’t make content, but they make important choices about that content: what they will distribute and to whom, how they will connect users and broker their reactions, and what they will refuse” (Gillespie, 2018b, pp. 254–255). Issues and questions regarding these brokerages harken back to community guidelines, and these governing documents can help mitigate or potentially aggravate conflict flashpoints.
However, for many, these documents are too vague, and when they are enforced, they are not enacted satisfactorily, effectively, or equally (Crawford & Gillespie, 2016). Similarly, criticisms lobbed at platforms accuse them of protecting their worst offenders of hate speech, harassment, and abuse (Phillips, 2015; Vaidhyanathan, 2018). According to Gorwa (2019), this online arena is, and remains, “fragmented between the platform companies (the architects of online environments), users (as individuals making decisions about their specific behavior in online environments), and governments (as the entities setting the overall ground rules for those interactions)” (p. 855). Governments enter the fray in myriad ways, and oftentimes, this is not through explicit intervention but rather from in-house, with the incorporation of American values into platforms’ architecture, policies, and ways of conducting business. Ammori (2014) elaborates how ways of thinking about the First Amendment enter into platform community standards, noting “The First Amendment . . . still influences top tech lawyers tremendously . . . it does so not as law but as a way of thinking about the issues and viewing the world” (p. 2283). Platforms incorporate values into their infrastructure and policies (Roberts, 2019; van Dijck et al., 2018), and American free speech doctrine is one such ideology. However, American free speech doctrine ideology is far from a mortar that unifies platform companies, users, and governments (see Gorwa, 2019). Against this tripartite fragmentation, First Amendment ideology becomes particularly complex in social media environments, given the haphazard ways community guidelines often seem to be applied, as well as the amount of hate speech, harassment, and abuse that occurs on these platforms.
Such complex fragmentation is one of the reasons why legal issues, freedom of expression, and social media platforms have a tenuous relationship, and government intervention does not always seem to be the answer. Many legal scholars reject government intervention in platform content moderation, citing overreaching governance (Harris et al., 2009; Jones, 2018). Platforms themselves often reject government interference for similar reasons, harkening back to Silicon Valley’s cyberlibertarian traditions of prioritizing individual liberties online by minimizing government interference that might stifle innovation and creativity (Roberts, 2019; Vaidhyanathan, 2018). This is one reason why, following criticism of Facebook’s alleged role in the spread of misinformation on the “Brexit” referendum and the 2016 US presidential election, Facebook CEO Mark Zuckerberg posted a 6,000-word treatise on his own profile, trying to convince individuals the platform could handle the pressures of a changing world. Zuckerberg tried to reassure users that the platform’s agenda had been updated, moving from “connecting people” to “building social infrastructure” that could fix these problems (Facebook, 2017). The problem with this, however, as Nieborg and Helmond (2019) point out, is that the position seems to imply “that the solution to Facebook is simply more Facebook” (p. 199). The idea that the solution to something is simply more of that very thing bears a striking resemblance to the legal metaphor often used to uphold and justify freedom of expression in the United States: the marketplace of ideas. While such a conclusion may seem apparent, Nieborg and Helmond (2019) do not explicitly or implicitly reference the marketplace of ideas in their work and instead focus on the ways in which Mark Zuckerberg proselytizes in the wake of troubles to convince users of the platform’s essentialness.
Taken another way, we interpreted Zuckerberg’s comments through the lens of how American values of freedom of expression are written into platform policies and infrastructure, specifically through the ideological metaphor of the marketplace of ideas (Ammori, 2014; Klonick, 2018; van Dijck et al., 2018).
The marketplace of ideas rose to prominence in Supreme Court Justice Oliver Wendell Holmes’s dissent in Abrams v. United States, in which he wrote that “the best test of truth is the power of the thought to get itself accepted in the competition of the market” (Abrams v. United States, 1919). For over a century, Justice Holmes’s view has served as the backbone of freedom of expression theory and practice (Harris et al., 2009; Jones, 2018; Joo, 2014; Schroeder, 2016). Proponents of the marketplace of ideas metaphor advocate that all speech, even harmful speech, should be given a spot in the arena. Subsequently, rational individuals, capable of determining truth and credibility, will weed out falsehoods, and the best idea, or “truth,” will rise to the top. Censorship is antithetical to the marketplace of ideas. Instead, the marketplace suggests the solution to harmful or inaccurate speech is simply more speech to correct the record (Joo, 2014).
The marketplace of ideas on the internet is not without its problems. Given the prevalence of cyberbullying, hate speech, misinformation, and trolls found online, we know that the best idea does not always rise to the top. However, as freedom of expression went digital, the marketplace metaphor followed, with many scholars noting the challenges of adapting the metaphor for the internet (Harris et al., 2009; Jones, 2018; Medeiros, 2017). According to Gillespie (2018), “free expression and vibrant community have long served as the twin principles for the social web: here they again provide discursive frames not just for celebrating social media [but] for justifying its self-regulation” (p. 49). The marketplace of ideas metaphor has also become a footing on which social media platforms can argue for their own self-regulation and enforce (or not enforce) their community guidelines. Community guidelines are value-laden documents, and some of the very values they espouse are also used to justify platforms’ self-regulation and lack of government intervention.
Scholars have noted how community guidelines are informed by a particular First Amendment jurisprudence (Ammori, 2014; Klonick, 2018), and this article aims to empirically examine that claim. To do this, we conducted a textual analysis of five US-based social media platforms (Facebook, Twitter, YouTube, Instagram, and Tumblr). The dominant themes that emerged worked in service of upholding the marketplace of ideas metaphor in these governing documents by focusing on a singular, global community, and an oft-referenced but never defined moderation line. Taken together, these community guidelines’ discursive strategies rely heavily on the marketplace of ideas metaphor to bolster a particular conception of American free speech. However, metaphors often fail to capture the material realities of individuals’ lives, and Klonick (2018) argues, “the law reasons by analogy, yet none of these analogies to private moderation of the public right of speech seem to precisely meet the descriptive nature of what online platforms are, or the normative results of what we want them to be” (p. 1662).
The shortcomings of legal metaphors, when combined with the transnational status of platforms and their promotion of a singular global community, demonstrate how platforms engage in a form of digital manifest destiny—writing American First Amendment jurisprudence into architectures and practices used around the world by billions of users in numerous countries with varying free expression laws of their own. By empirically examining the marketplace of ideas and American jurisprudence claim made by legal scholars, this work also has theoretical implications for future studies examining content moderation, platform governance, free speech, and American First Amendment ideology on social media.
Literature Review
Social Media Platforms
We follow Gillespie (2018) in defining social media platforms as “sociotechnical assemblages and complex institutions” (p. 18): Platforms are online sites and services that (a) host, organize, and circulate users’ shared content or social interactions for them (b) without having produced or commissioned (the bulk of) that content (c) built on an infrastructure, beneath that circulation of information for processing data for customer service, advertising, and profit. (Gillespie, 2018, p. 18)
Platforms do not neutrally exist, as the human beings behind their policies and infrastructure engrave specific norms and values within the sites (van Dijck et al., 2018). These norms and values run deep and are often baked into the platform’s architectural foundations (see Noble, 2018). Therefore, although platforms are intermediaries in how “they mediate between users who produce content and users who might want it” (Gillespie, 2018, p. 256), this bridging stages connections and manages interactions in specific ways (van Dijck, 2013).
Platforms connect users and content, but their chief commodity is content moderation (Gillespie, 2018). Often, community guidelines are the public’s first encounter with social media policies, and the guidelines come to the fore when moderation is required. Platforms, therefore, become the arbitrators for what is and what is not allowed in the public sphere. For instance, debates regarding breastfeeding on Facebook and #FreeTheNipple on Instagram demonstrate that permissible social media content is always already situated within larger cultural discussions and values. In fact, it is often these values that are at stake in the struggles of platform organization and moderation (van Dijck et al., 2018). Community guidelines are the documents that are referenced when platforms decide to uphold or reject certain values, and these documents may evolve over time in response to changing cultural norms. Varying degrees of explanation and justification underscore these documents and decisions (Gillespie, 2018). These documents reflect the ways platforms govern, even if that governance is extremely hands-off. Given this, questions of free expression are paramount to understanding community guidelines.
The Marketplace of Ideas on the Internet
As previously noted, the marketplace of ideas metaphor for American speech protections grew out of Justice Oliver Wendell Holmes’s dissent in Abrams v. United States, arguing that all speech should have a chance to be heard. Holmes adopted the premise from Enlightenment era thinkers, who contended a single truth is ultimately attainable, and the best course of action for free speech is an open trade of ideas. This is a maximalist perspective in which proponents largely accept that a competitive and free market allows for all ideas to be heard and the best accepted (see Medeiros, 2017). It is important to note, however, that at its core, the marketplace of ideas metaphor is just that—a metaphor. It is meant to guide, not be an empirical or literal step-by-step roadmap.
Regardless, the marketplace of ideas metaphor has been the backbone of American freedom of expression for over a century. However, it has not been without its critiques. Broadly speaking, critics of the marketplace contend that (1) there is never just a singular truth waiting to be discovered; (2) power is not always equal among those in discussions; and (3) groups possess specific biases (Joo, 2018; Medeiros, 2017; Schroeder, 2016). While the marketplace metaphor can be useful in sifting through certain elements of free speech, it is ultimately based on assumptions that overlook social nuances and power imbalances.
A predominant reason these problems persist in the marketplace of ideas is the metaphor’s strong Lockean underpinnings (Harris et al., 2009). John Locke’s influence comes through most clearly in the idea of a singular prevailing truth waiting to be discovered. This assumes separations between individuals, societies, and truths, and it ignores lived experiences and the ways these sociocultural, technological, and individual factors are enmeshed together in a dialogical relationship. Yet the United States remains “Lockean to the last breath” (Harris et al., 2009) in its understandings of free expression. This creates a terrain in which individuals, the communities they make up, and their speech within said communities are treated as discrete elements and not a complex dialectic.
The marketplace of ideas metaphor bolsters American freedom of expression, but it is not the only idea that bolsters such expression in the digital age. Specifically, Section 230 of the Communications Decency Act (CDA) has been a major factor in how free expression functions online. Section 230 states, “no provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider” (47 U.S.C. § 230, 1996). In other words, internet platforms cannot be held responsible for the content posted or exchanged on their sites.
As legal scholars have noted, there is a substantial overlap between the tenets of the marketplace of ideas and the tenets of Section 230. Discussing the relationship between speech and counter speech, Medeiros (2017) invokes the marketplace of ideas and states, “Section 230 is a boon because it augments this paradigm, encouraging citizens to increasingly resolve disputes through ‘more speech’ rather than speech suppression” (p. 2). Section 230 emerged because “the concern was that holding online service providers liable for inexact screenings [of their content] would not result in improved screening but rather no screening at all. This is because providers could avoid publisher liability if they act as purely passive conduits” (Citron & Wittes, 2017, p. 405).
Within the nexus of the marketplace of ideas and Section 230, social media platforms become a marketplace insofar as a marketplace is only a literal space for interaction. This is problematic because the internet of the twenty-first century is not the same internet that existed in 1996 when the CDA was passed. Citron and Wittes (2017) elaborate: “As one court recently put it, ‘Congress did not sound an uncertain trumpet when it enacted the CDA, and it chose to grant broad protections for internet publishers.’ For support, courts have pointed to Section 230’s ‘findings’ and ‘policy’ sections, which highlight the importance of the ‘vibrant and competitive free market that presently exists’ for the internet and the internet’s role in facilitating ‘myriad avenues for intellectual activity’” (Barnes v. Yahoo!, Inc., 2009, cited in Citron & Wittes, 2017, p. 407).
What is particularly noteworthy in the American government and judicial support for Section 230 is “the importance of the ‘vibrant and competitive free market’” (Barnes v. Yahoo!, Inc., 2009, cited in Citron & Wittes, 2017, p. 407), which echoes marketplace of ideas tenets. In relying on their safe harbor protections, social media platforms essentially rely on the marketplace of ideas to protect themselves and allow for the maximum amount of free expression. Such a move is predicated upon cyberlibertarian ideologies, which largely dominate the Silicon Valley tech sector and dictate minimal government regulation, censorship, or anything else that would get in the way of a “free” world wide web. Therefore, the marketplace of ideas and Section 230, while American legal doctrine and jurisprudence, when applied by social media platforms, also serve to uphold Silicon Valley cyberlibertarian ideology.
Although some scholars have argued that the marketplace of ideas still works online and just needs some reworking (Harris et al., 2009; Jones, 2018), there is a fault line between the theory of the metaphor and its implementation in online contexts. Instead of becoming a vibrant arena in which all ideas are heard equally, social media have created echo chambers in which individuals only seek out opinions they agree with (Jones, 2018). Furthermore, the idea that speech should beget more speech is fraught online, as “individuals have difficulties expressing themselves in the face of online assaults. They shut down their blogs, sites, and social networking profiles not because they tire of them but because continuing them provokes their attackers” (Citron & Wittes, 2017, p. 420). Instead of engaging in a free flow of ideas, individuals may be forced out of the marketplace of ideas because platforms often do not step in to protect them from abuse, harkening back to their status as passive conduits.
Method
Because our analysis was fundamentally concerned with a metaphor and its application, critical textual analysis was a fitting way to examine the marketplace of ideas tenets present in social media community guidelines. Given that community guidelines are front-facing, seemingly user-friendly documents, we assumed the phrase itself was not going to appear (an assumption we later confirmed). This echoes Gillespie’s (2018) point that overt meanings are difficult to come by in social media policies, an intentional and strategic move. Therefore, we knew we would be working in the realm of metaphor and latent meanings, not manifest content. This made textual analysis a useful methodology, given its ability to reveal social conditions in indirect and metaphorical ways (Phillipov, 2013).
Before moving forward, it is important to acknowledge how the documents we analyzed are ever evolving. In the future, as platforms add new provisions and updates, these documents will most likely look very different from the ones we pulled for analysis in August 2018. This is not a limitation, but rather, according to Couldry and van Dijck (2015), a benefit: “a materialist account of the processes . . . must precisely hold on to the memories of those older versions of the social as a reference point against which to judge the hegemonic role in hosting social life now played by digital platforms” (emphasis in original, p. 2). Our work is not a materialist longitudinal analysis but an analysis of how these community guidelines look at one point in time. Future studies of community guidelines can harken back to this work as one such concrete memory Couldry and van Dijck (2015) speak of.
To understand how social media platforms specifically use the marketplace of ideas metaphor, we began with the most popular social media platforms of 2018, as determined by SmartInsights (Burzler, 2018): Facebook, YouTube, WhatsApp, Facebook Messenger, WeChat, QQ, Instagram, Tumblr, QZone, SinaWeibo, and Twitter. First, we were not interested in looking at social media–based chat applications, so we removed WhatsApp, Facebook Messenger, WeChat, and QQ; we were less concerned with private individual and group chats than with more traditional conceptions of social media platforms, which may contain chat services. Second, given our focus on the marketplace of ideas, a fundamentally Western, and specifically American, conception of free speech, we removed QZone and SinaWeibo, because it would be unfair to apply such a perspective to non-Western– and non-American–based social media.
After applying these parameters, we were left with five social media sites: Facebook, YouTube, Instagram, Tumblr, and Twitter. From there, we pulled the community guidelines from each of these sites and compiled them into a single document. Given our parameters, these texts were all written in English (though language on social media platforms can change based on user preferences) and addressed to an international range of potential users, who vary in gender, race, nationality, ethnicity, socioeconomic status, and age.
Pursuant to Stuart Hall’s method for textual analysis, we engaged in an initial reading of all texts, followed by a “long preliminary soak” (Hall, 1975, p. 15), in which we reflected on what we had read in conjunction with our theoretical spotlights for this analysis. Then, each document was read and re-read in ways that privileged examinations of stylistic patterns and recurrent words, themes, and ideas. The instances we identified reflected latent invocations of the marketplace of ideas metaphor in the community guidelines, as well as the ways an ambiguous moderation line and a singular global community bolster this position of free expression.
Analysis
The Digital Marketplace of Ideas—Sold Through Community
Our primary research question examined social media community guidelines for latent traces of the marketplace of ideas metaphor to better understand its application and its implications for freedom of expression. Throughout our reading, we discovered that, yes, the marketplace of ideas metaphor lingers below the surface in several ways. However, it does have one key moment of departure. While we discuss that departure shortly, we first discuss the similarities. First and foremost, the metaphor most often appeared through discussions of community.
Within community guidelines, platforms mostly encourage users to embrace freedom of expression, and they present themselves as a place to do just that. According to Facebook, “we recognize how important it is for Facebook to be a place where people feel empowered to communicate, and we take our role in keeping abuse off our service seriously” (Facebook, 2018). In this way, Facebook presents itself as simultaneously a place and a service, which creates a dialectic central to the platform’s offering of community and the marketplace of ideas. Similarly, Twitter (2018) says, “we believe that everyone should have the power to create and share ideas and information instantly, without barriers.” Instagram adds, “we want Instagram to continue to be an authentic and safe place for inspiration and expression. Help us foster this amazing community” (Instagram, 2018). Through the marketplace of ideas metaphor, social media platforms, vis-à-vis community guidelines, simultaneously position themselves as a literal marketplace (a gathering space and community where people can come together to discuss whatever they like) while focusing on the service they provide (moderation of that space to ensure things run smoothly).
In addition to positioning themselves as a literal marketplace for individuals to come together and discuss ideas, the social media platforms examined encourage users to resolve their own disputes. This echoes the fundamental ethos of the marketplace of ideas in that the solution to harmful or inaccurate speech is more speech to correct it. Tumblr (2018) says, “we encourage you to dismantle negative speech through argument rather than censorship. That said, if you encounter anything especially heinous, tell us about it.” While Tumblr does indicate a willingness to step in, what is considered to be “especially heinous” is not defined. This leaves room to question when one should stop the “argument” and move toward “censorship” (as Tumblr’s governing document frames it, reporting content to the platform would amount to a form of censorship). Instagram (2018) expresses a similar sentiment: “many disputes and misunderstandings can be resolved directly between members of the community.”
Even in the face of negative speech, freedom of expression remains a key tenet. Facebook says they “err on the side of allowing content, even when some find it objectionable, unless removing that content can prevent a specific harm” (Facebook, 2018). Twitter (2018) promotes that they “believe in freedom of expression and open dialogue, but that means little as an underlying philosophy if voices are silenced because people are afraid to speak up.” YouTube (2018) points out they “encourage free speech and try to defend your right to express unpopular points of view.” Tumblr (2018) is “deeply committed to supporting and protecting freedom of speech” and Instagram (2018) “want[s] to foster a positive, diverse community.” This is the fundamental core of these governing documents, protecting free speech above all else, even if others do not always agree with it. This sentiment is key to the social media platforms analyzed and the marketplace of ideas metaphor.
Leaving the Marketplace?
However, in our analysis, we found one key departure from the marketplace of ideas metaphor and that was in the repeated suggestion to block, unfollow, or hide fellow users. To be sure, blocking and other similar actions are complex practices that deserve more analysis than we can provide here, but we are interested in these practices as points of departure within the marketplace of ideas metaphor that is otherwise prevalent in these documents. These departures are discussed below.
If the marketplace of ideas metaphor contends the solution to “bad” speech is simply more speech, then blocking, unfollowing, or hiding content, while sometimes a necessary maneuver, cuts it off at the knees. That being said, the idea of blocking appeared numerous times in these community standards documents. Facebook (2018) says, “we also give people the option to block, unfollow, or hide people and posts, so that they can control their own experience on Facebook.” Tumblr (2018) encourages, “if anyone is sending you unwanted messages or reblogging your posts in an abusive way, we encourage you to be proactive and block the hell out of them.” YouTube (2018) suggests, “try deleting comments and blocking the user if another user is bothering you . . . you can also turn comments off for any video or manage comments by requiring pre-approval before they get posted.” In these cases, platforms do not concede the solution to “bad” speech is more speech—they encourage the very gatekeeping strategies often avoided by the marketplace.
This blocking exception does not mean the marketplace of ideas metaphor is not present in other ways. Tenets of the metaphor are present, as indicated above. In fact, this departure aligns with what many proponents of the metaphor have argued—that the marketplace of ideas metaphor still works in the internet age; it just needs tweaks (Harris et al., 2009; Jones, 2018). It could even be argued that micro-level blocking is better than the platform stepping in on a meso- or macro-level. According to Harris et al. (2009), “an essential element of the Lockean philosophy of tolerance is the implicit acceptance that freedom is indivisible; that abhorrent and offensive speech is the price democracy pays for embracing and protecting values of free expression and equality” (p. 176). In this way, per the proponents of the marketplace of ideas, blocking is an individual sacrifice for the greater good of collective freedom of expression. However, these are the same practices that contribute to the echo chambers and filter bubbles of social media in which individuals only surround themselves with content they agree with (Pariser, 2011). Within the marketplace of ideas, blocking is a nebulous practice, seemingly supporting the marketplace in moderation but crippling it in excess.
The Ambiguous Moderation Line
Related to the idea of freedom of expression is our second main finding in this analysis: the repeated referencing of a “line” in content moderation and acceptable platform behavior. The phrasing of acceptable and unacceptable behavior was frequently constructed around the idea of a line or crossing a line. Twitter (2018) writes, “we prohibit behavior that crosses the line into abuse.” Tumblr says, “we draw lines around a few narrowly defined but deeply important categories of content and behavior that jeopardize our users, threaten our infrastructure, and damage our community” and, regarding self-harm, “we will only remove those posts or blogs that cross the line into active promotion or glorification of self-harm” (Tumblr, 2018). Instagram favors “boundaries” over “lines,” but echoes a similar sentiment: “we’re committed to these guidelines and we hope you are too. Overstepping these boundaries may result in deleted content, disabled accounts, or other restrictions” (Instagram, 2018).
However, it is YouTube that relies the most on crossing a line. The mention of a line crops up repeatedly in YouTube’s “Policies and Safety” document: “if harassment crosses the line into a malicious attack it can be reported and may be removed”; “in cases where videos do not cross the line, but still contain sexual content, we may apply an age-restriction”; “we draw the line at content that intends to incite violence”; “there is a fine line between what is and what is not considered to be hate speech”; “in cases where harassment crosses the line into a malicious attack . . .”; and “respect people’s opinions online but know when it crosses the line,” among myriad others (YouTube, 2018). Why YouTube relies on the idea of a content moderation line much more than the other platforms analyzed may relate to its specific dynamics and content volume (Burgess & Green, 2018). However, the references to an ambiguous line reflect a broader ambiguity in content moderation on YouTube, which may help explain why bullying, hate speech, and radical content remain substantial problems for the video sharing site (see Donovan et al., 2019).
Overall, however, the idea of crossing a line helps explain how social media platforms negotiate—or absolve—their position in content moderation. Even though a line or boundary is frequently invoked, it is never defined. This places the onus on the user, not the platform, to know when something may be too inappropriate, even for the marketplace of ideas. The platforms never commit to a definition of when the boundary has been crossed, leaving the user to navigate a community marked by imaginary lines. This makes sense: under Section 230, platforms generally do not face accountability if they remain passive conduits. The repeated mention of a content moderation line for acceptable and unacceptable behavior bolsters the “more speech over censorship” theme previously discussed. Platforms do not want to step in unless absolutely necessary and would prefer that users hash out their own disputes. Medeiros (2017) issues a blistering critique of such (in)action, writing, “the compulsion to engage in counterspeech signifies an abdication of ethical duty by platform operators” (p. 2). The ambiguous moderation line is a precarious point of content moderation, and it complicates another offering of platforms—that of a singular, global community.
A Singular Global Community
We have already shown how the reliance on community in these governing documents reflects a metaphorical creation of the marketplace, but community remains important in these documents for another reason. Gillespie (2018) suggests moderation is the primary commodity offered by most platforms, and while we do not dispute this, we found platforms also offer a secondary commodity: a singular, borderless, global community. What emerges is the idea that these platforms are offering community—as the metaphorical marketplace—as something to be gained by participating on these sites.
These governing documents offer not just any type of community but a specific conception of a singular, global, borderless community. According to Facebook (2018), “the conversations that happen on Facebook reflect the diversity of a community of more than two billion people communicating across countries and cultures and in dozens of languages.” They also write, “our standards apply around the world to all types of content” and “our policies may seem broad, but that is because we apply them consistently and fairly to a community that transcends regions, cultures, and languages” (Facebook, 2018). Facebook sells an idea of community unbound by corporeal or geographical limits.
Facebook is not alone in offering this specific idea of community as a benefit for being on the platform. According to YouTube (2018), “when you use YouTube, you join a community of people from all over the world” and “the YouTube community is important to us and we want to see it flourish.” Tumblr (2018) writes, “Tumblr is a common ground for millions of people from a wide variety of locations, cultures, and backgrounds.” Instagram (2018) says, “Instagram is a reflection of our diverse community of cultures, ages, and beliefs” and “thank you for helping us create one of the best communities in the world.” Platforms position themselves as being able to make such a community happen. While it is natural for these platforms to take pride in the diverse geographical range of their users, the frequent incorporation of such discourse into their community guidelines is striking. What is ignored in this touting, however, are the material actualities and practices through which individual users interact, and how these interactions are often based in culturally specific practices. This also ignores how individual countries around the world handle their own negotiations of free speech and expression.
The Twitter Exception
Before moving into our discussion, it is key to note one exception to a singular, borderless, global community: Twitter. In our analysis, we found “The Twitter Rules” do not conceive of community in the same singular global way, and this is a departure in Twitter’s reliance on the marketplace of ideas metaphor. We initially arrived at this conclusion because the word “community” is not mentioned once in the Twitter Rules. Furthermore, the structure of the Twitter Rules focuses first on intellectual property, and they highlight the importance and safety of their platform infrastructure throughout. This was something the other platforms analyzed did not do at all or did not mention to the same extent as Twitter. This included, but was not limited to, discussion of how individual users were not to tamper with non-public areas of Twitter, including its computer systems, security authentications, and technical delivery systems (Twitter, 2018). While there are many reasons Twitter does not focus on community, we echo the one documented by Stephansen and Couldry (2014), who found Twitter was never intended to be a site of community building but, rather, a platform where information dissemination was the priority. Therefore, Twitter may offer a marketplace of ideas, but it does so in a way that privileges an arena for freedom of expression. It does not prioritize community as something to accompany freedom of expression.
We anticipated departures and deviations in our analysis of these platforms because platforms are not monolithic. While some do own others, and we recognized similarities in their governing documents (e.g., Facebook and Instagram), platforms present themselves in different ways depending on their different needs. In fact, these deviations enhance our argument by showing the numerous ways platforms can adopt the marketplace of ideas, demonstrating the sweeping breadth of the metaphor. Such sweeping breadth, however, is not without tensions to unpack, and this is what we turn to below.
Discussion
The Failure of the Metaphor
Throughout this analysis, we have shown the different ways platforms adopt the marketplace of ideas metaphor in their governing documents. They do not do so in monolithic ways; rather, they adopt the metaphor to meet their specific needs. While we recognize the marketplace of ideas is essential to understanding American freedom of expression, the metaphor is not without tensions. This is particularly the case since, as we have demonstrated, the metaphor can be adopted or interpreted in myriad ways.
Critics have long contended the problem with the marketplace of ideas metaphor is just that—it is a metaphor. This is key to the denotative meaning of metaphor, in that something is applied to an object or action to which it is not literally applicable, only figuratively so. Yet the marketplace of ideas metaphor seems to be adopted literally in certain situations within these governing documents. By a literal adoption, we do not mean that platforms say, “we govern this way because of the marketplace of ideas.” We mean that the curators of these documents subscribe to marketplace of ideas ideologies, which often percolate below the surface (Hall, 1975; see also Ammori, 2014; Klonick, 2018). We extend this argument to a literal adoption of the metaphor given the repeated words, phrases, and ideas found in these documents that prioritize free speech and more speech to correct the record. This is the core of the marketplace of ideas metaphor.
However, the problems with the marketplace of ideas, as related to social media governing documents, are primarily twofold. First, there is a tension to unravel in heavy reliance on something meant to be figurative, not literal. Second, the marketplace of ideas metaphor was created in a much different era, when it would have been difficult to account for the challenges presented by the internet and social media.
First, the marketplace of ideas metaphor is just that—a metaphor. Joo (2018) criticizes the marketplace of ideas, arguing that a fundamental error in the metaphor is how it attempts to connect speech to economic markets. He elaborates, “the assumptions of the metaphor are inconsistent with theory and experience . . . economic markets do not produce normative or empirical ‘truth’” (p. 1). It is therefore a stretch to compare speech to economics, as economic markets have no business with one truth rising to the top. Joo (2018) argues this makes the marketplace of ideas metaphor “ultimately little more than an antiregulatory assertion that provides no insights into how the law should deal with speech or markets” (p. 1, emphasis added). In this way, and as supported by our findings, social media governing documents can be seen as treatises on antiregulation that uphold Silicon Valley’s techno-utopian libertarian principles.
The second problem with the marketplace of ideas metaphor on social media relates to the sheer amount of speech online. Jones (2018) is largely a proponent of the marketplace metaphor, but even she concedes the massive amount of speech online “actually puts a strain on the marketplace—weight it may not have been designed to carry” (p. 9). Given the blocking, unfollowing, and unfriending mentioned by platforms, what may actually be occurring is a divvying up of the marketplace. Instead of the marketplace bearing the load of myriad online speech, people create their own mini-marketplaces. This is the filter bubble problem (Pariser, 2011): individuals are able to avoid speech they disagree with instead of seeking out robust debate to solve problems. At the same time, the marketplace metaphor falls apart in an online setting through the deployment of algorithms on social media, known for exalting sensational ideas by displacing the “truth” (Pariser, 2011; Syed, 2017), leading to issues of disinformation. While this cyber-environment might breed disinformation and reify bias, the marketplace metaphor exonerates platforms from actively taking part in moderating it.
It should be noted our goal here is not to upend a century of American freedom of expression law and policy. We merely mean to show how adopting the marketplace of ideas metaphor in social media governing documents is problematic in how it relates to larger issues of US-based social media trying to assert their influence on a global scale. Having discussed the former, we now turn to the latter. A singular, global, borderless community is not without its problems.
Digital Manifest Destiny
The complication in these platforms’ specific emphasis on a global community that transcends borders is presented in their very words: their users come from all over the world. When users from all over the world are grouped together into a lump-sum community, non-American users are seemingly held to American standards of free expression. We know that content moderation does vary by country, but when Facebook writes, “our standards apply around the world to all types of content” (Facebook, 2018), one cannot help but see the spread of Silicon Valley’s American ideals on a global scale.
In the early days of the social web, ideas of “techno-tribalism” and “techno-Manifest Destiny” were common. Sarah Roberts (2019) writes, “Although champions of ‘cyberspace’ often suggested limitless possibilities for burgeoning internet social communities, their rhetoric frequently evidenced jingoism of a new techno-tribalism, invoking problematic metaphors of techno-Manifest Destiny: pioneering, homesteading, and the electronic frontier” (p. 10). By empirically validating a claim made by legal scholars (Ammori, 2014; Klonick, 2018), we found this rhetoric has not gone away and remains key to how platforms present themselves to their users. In incorporating tenets of the marketplace of ideas metaphor into their governing documents, and by simultaneously touting the importance of a singular global community, these American-based social media platforms engage in a form of contemporary digital manifest destiny. Although far from the nineteenth-century belief that Americans were destined to claim land, the dominance of these social media platforms, and their incorporation of American free speech metaphors into their governance, spreads “Americanness” across the globe. And far from actually spreading democratic values, manifest destiny was always more of a power assertion to lay claim to territorial dominance.
In this way, what we found lying below the surface of these community guidelines is actually Silicon Valley values disguised as American democratic ones. This spreads cyberlibertarianism in place of actual American free speech (noting, particularly, that given Silicon Valley’s location within the United States, it is a specific conception of American-tinted cyberlibertarianism that is spread). Therefore, what comes across as American First Amendment jurisprudence is ultimately a justification for such Silicon Valley cyberlibertarianism. The result is private regulation of online speech by platforms protected by policies and free from government interference, while simultaneously remaining true to their own cyberlibertarian values. Platforms then spread these values to users in their governing documents, creating significant implications for user participation in democratic culture. By enforcing their own ideologies in the guise of an antiquated marketplace of ideas metaphor, replete with universal community and vague imaginary lines, social media platforms open themselves up to what we already know to be there—disinformation, bias, abuse, and harassment. Given the resistance to regulation and censorship within cyberlibertarianism, and given how cyberlibertarianism is at the core of Silicon Valley social media user policies, can platforms ever really be ideologically or practically aligned with assisting those in harm’s way on their sites, when doing so would be antithetical to what they believe?
This is where we agree with scholars who note problems with US social media dominance (Gorwa, 2019; van Dijck et al., 2018) and advocate for a re-imagining of the marketplace of ideas metaphor (Schroeder, 2018) and of free speech values on American platforms (Klonick, 2018). While it is not our opinion that censorship and government regulation are the definitive answer, the current digital landscape should demand some level of democratic protection—not cyberlibertarianism disguised as American democracy—that is inclusive of the cultural diversity it represents.
Acknowledgements
The authors would like to thank Tarleton Gillespie for his support in the preparation of this manuscript.
Declaration of Conflicting Interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author(s) received no financial support for the research, authorship, and/or publication of this article.
