Abstract
This article analyzes grassroots opposition to the website Ripoff Report (RoR). RoR is a user-generated content (UGC) platform for “consumer reviews” about both business entities and, often, individuals. In America, Section 230 of the CDA (1996) empowers RoR to refuse removing even postings that have been judged defamatory. Instead, the site counsels rebuttal (“counterspeech”) or paying for its self-administered arbitration service—audaciously casting itself as a more efficient (for-profit) substitute for the court system. RoR therefore represents the liberal “marketplace” orientation of Section 230 taken to its logical extreme. Grassroots opponents claim that official legal deference to the content policies of sites like RoR creates a unique kind of symbolic and normative harm. Building on the existing practical critiques of Section 230, I argue that these opponents implicitly invoke Donald Downs’ “community security” paradigm in a digital context. They call on both websites and government to increasingly prioritize protecting citizens from the indignity of confronting (what they see as) personally humiliating speech rather than simply counseling “more speech” as the solution. The RoR controversy thus gives us additional insight into the popular objections provoked by Section 230. Overall, studying these objections helps further our nascent understanding of the consequences and reactions when “platforms intervene” as regulatory forces.
Enacted in 1996, Section 230 of the Communications Decency Act has fueled intense debate over the regulation of online speech in the United States. Section 230 offers user-generated content (UGC) sites a “safe harbor” from tort liability as “publishers” of third-party postings that might harm reputation or privacy. If, say, a YouTube user libels somebody in a video or its comments, the site itself is protected. If a platform is not inclined to police the offensive or even libelous content of speech submitted by users, it has little obligation to do so.
The UGC consumer review platform Ripoff Report (RoR) has a notorious policy in this regard: it will not remove reports once posted, even when a court has judged them defamatory.
There are compelling arguments both for and against Section 230 that have yet to be reconciled in the avalanche of practically oriented writing on the matter. Any legislative revision of Section 230 will have to weigh the likelihood of platforms preemptively censoring otherwise protected speech against some measurement of the harm being facilitated. Such a computation is beyond the scope of this article. Instead, I focus here on how one of the most notorious beneficiaries of Section 230 exposes disagreement about the meaning of digital age speech norms and governance structures. Specifically, RoR helps illuminate a more fundamental concern in contemporary culture about the power of private platforms to decide what speech is and is not visible online.
The debate over RoR is, at bottom, a disagreement about the “marketplace of ideas” theory and its preferred remedy, “counterspeech.” Both the site and some defending its policies contend that if one is insulted or libeled on the Internet, one can simply respond. Section 230 is a boon because it augments this paradigm, encouraging citizens to increasingly resolve disputes through “more speech” rather than speech suppression.
A strain of popular opposition to this regime, however, has conversely argued that so much deference to the content policies of private technology platforms in fact causes a unique brand of reputational and psychological indignity. To them, the compulsion to engage in counterspeech signifies an abdication of ethical duty by platform operators and of protective duty by the state. In theoretical terms, the tenor of anti-RoR activism suggests a digital age extension of the “community security” paradigm advanced by political theorist Donald Downs (1985): they seek a speech regime that prioritizes the dignitary needs of citizens over a dogmatic defense of the liberty to speak. Taken to its logical extreme, this would require platforms to be much more conservative in policing any speech that provokes complaint. At the same time, the activists indeed prompt us to question the outcomes of the prevailing “marketplace”-oriented framework. Overall, studying activism against RoR helps further our nascent understanding of the consequences and reactions when “platforms intervene” (or opt not to) as regulatory forces (Gillespie, 2015).
Section 230 and Theories of Free Speech
Section 230 stipulates that “[n]o provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider” (47 U.S.C. 230(c)(1), 1996). Hosts of third-party speech can thus claim “safe harbor” from liability for the speech of others on their platforms. In some jurisdictions, the safe harbor defense only holds as long as the editorial or design parameters of the platform did not cross the nebulous threshold point of “developing” the content submitted by third parties (FTC v. Accusearch, 2009; Fair Housing Council v. Roommates.com, 2008).
Proponents of Section 230 typically argue that it would be impossible for sites to host user-generated speech without some sort of protection from liability for its content. In a recent series, the Electronic Frontier Foundation (EFF) effectively outlined this core defense: Section 230 allows UGC platforms to thrive. Given the sheer volume of content posted, without statutory protection from liability “most would likely not host any user content at all or would need to protect themselves by being actively engaged in censoring what we say, what we see, and what we do online” (Electronic Frontier Foundation [EFF], 2016). A Yelp lawyer confirmed this likelihood to EFF: “Absent CDA 230, websites like Yelp would be pressured to avoid liability by removing legitimate, negative reviews, and they would deprive consumers of information about the experiences of others” (EFF, 2016). Legal scholar William Freivogel (2011) contends that even a “notice-and-takedown” system modeled after copyright law would not work:
[a] notice-and-takedown procedure likely would result in sites taking down most of the content about which a complaint is filed—whether that content was truly objectionable or not. It simply would be too hard to review the validity of all of the complaints. (p. 41)
Defenses of Section 230 are thus steeped in the liberal logic of the marketplace of ideas. Descended from John Stuart Mill’s (2002) On Liberty, this tradition holds that truth is best served by open competition among ideas, and that the proper remedy for objectionable speech is rebuttal rather than suppression.
Expanded “cheap speech” in fact represents the more perfect realization of the Supreme Court’s preexisting defamation framework. As Justice Powell wrote in the 1974 Gertz v. Robert Welch decision, “the first remedy of any victim of defamation is self-help—using available opportunities to contradict the lie or correct the error and thereby to minimize its adverse impact on reputation.”
Contemporary skeptics fundamentally channel an older “market failure” critique (e.g. Fiss, 1996; Sunstein, 1995). While counterspeech might be an available remedy in principle, these critics contend that the speech “market” does not reliably correct falsehoods or protect the vulnerable in practice.
Ironically, the safe harbor provision was originally conceived to give content providers incentives to be “good samaritans.” Critics like Chu and Leiter contend that it has in fact made them more reluctant to intervene.
Critics like Tungate (2014) simply argue that specific blind spots in Section 230 (e.g. regarding “revenge porn”) require tweaks to the legislation. Others like Chu, however, call for outright repeal to force content hosts to affirmatively police their platforms. As prominent attorney and blogger Ken White (2015) has pointed out, a commentator like Chu probably “opposes more than just [currently] unprotected speech” and thus might not mind if many platforms simply folded. Leiter “remain[s] agnostic” on whether the law should be modified to impose liability on intermediaries for even negligently disseminating otherwise protected speech that inflicts “dignitary harms,” but advocates imposing negligence liability if they fail to act when notified of the existence of defamatory material (p. 171).
Grappling with RoR certainly implicates the above issues; at the same time, it also prompts some conceptual concerns that are less represented in this literature. Specifically, freeing private platforms like RoR from liability for third-party speech has a symbolic significance: it strikes some of those affected as an abdication of the state’s protective responsibility. Furthermore, even if one theoretically accepts the marketplace premise, being left with counterspeech as one’s only practical remedy can itself feel demeaning.
These more conceptual concerns can be situated within longstanding First Amendment debates through the concept that political theorist Donald Downs (1985) has called “community security.” The provision of community security is a marker of state legitimacy:
One of the most basic and important functions of community and government is to protect its citizens from assaultive speech and marked incivility . . . a community that does not protect its citizens from unjustified psychological assaults . . . is not well ordered and cannot claim legitimacy. (p. 17)
When the neo-Nazi National Socialist Party of America (NSPA) proposed a march through Skokie, IL (home to many Holocaust survivors), in 1977, Downs argues, those who invoked the bromide to “fight bad speech with more speech” neglected an important duty of the state to protect the Jewish residents of Skokie from having to countenance this speech at all. In doctrinal terms, Downs’ theory proposed loosening the prevailing “content neutrality” doctrine (which generally forbids the government from restricting speech based on its subjectively undesirable ideas) in order to accommodate state restriction of speech advocating “assaultive” beliefs. Such a standard would be difficult to apply reliably, but the call to prioritize community security has a more general conceptual resonance as a critique of the marketplace theory.
Specifically, Downs rejected the typical marketplace appeal to the notion of “republican virtue.” Communities cultivate “republican virtue” when they are self-reliant in confronting problematic forces within their own ranks. The emphasis on deliberation and counterspeech in the marketplace theory parallels this goal. In the Skokie conflict, some pointed to the fact that the public argument about the proposed Nazi march presented an opportunity for citizens to confront hate and to publicly reject anti-Semitism. As one American Civil Liberties Union (ACLU) lawyer argued, “[t]he best consequence of the Nazis’ proposal to march in Skokie is that it produced more speech . . . [i]t stimulated more discussion of the evils of Nazism” (p. 112). Furthermore, the proposed counter-demonstration during the scheduled Nazi parade ultimately helped dissuade the NSPA from coming (p. 92).
Downs argues that these justifications ignore the pronounced negative outcomes. First, while the more outspoken survivors may have relished taking on the NSPA in the public forum, others were shaken by the prospect of confronting neo-Nazis on parade advocating the extermination of Jews (pp. 90-91). These citizens would simply have hidden in their homes. A libertarian free speech regime that casts “republican virtue” as its ultimate goal might well succeed for the most self-confident or socially powerful, but it is simply threatening to those who happen to have different temperaments.
Just as the unfettered marketplace might require a combative temperament, it is also sometimes unclear why the “speech” produced in such confrontations is democratically valuable. In the Skokie conflict, the NSPA canceled the parade because of the threat of “massive retaliation” through physical violence. This hardly constitutes a victory for the so-called search for truth. For Downs, therefore, the kind of facile encouragement of self-help through counterspeech simply represents “one of the worst lessons that the law can teach” (p. 92).
The ultimate point of promoting community security as a “democratic principle that is coeval with free speech” (p. 114) is thus to foster a public forum that does not demand the fortitude to scream back at a procession of neo-Nazis. In principle, this position recapitulates the ultimate concern with democratic outcomes evinced in the aforementioned “market failure” theories and the seminal writings of Alexander Meiklejohn (1948). As Daniel Solove (2008) has summarized, in this view, speech protection is “important not because we should protect the individual’s desire to speak but because free speech is necessary for a robust political discourse” (p. 130). When participating in such a discourse simply feels frightening or demeaning, then perhaps the rationale for an unfettered “marketplace” is weakened.
Ultimately, viewing RoR activism through the prism of Downs’ theory of community security shows how free speech debates intersect with broader concerns about neoliberal governmentality. Does the current regime perhaps represent an analogue to what political scientists like Verkuil (2007) have decried as the “practice of using private contractors to perform essential or inherent functions in . . . government”? Has the government (unofficially) “outsourced” the most immediately consequential decisions about the visibility of speech to powerful technology companies, leaving citizens to fend for themselves in the process? In spirit, such an objection is what animates the RoR opposition. They resent the idea that the government has deferred such important decisions to (often mercenary) private intermediaries, and they call for such private intermediaries to embrace an ethical duty to provide “community security” given their position as de facto arbiters of speech visibility.
The Rhetoric and Policies of RoR
RoR has provoked a unique level of resentment compared to other UGC sites for “consumer review” complaints (which are often simply about individual people). For instance, no Twitter accounts equivalent in fervor to “@killtheRoR” turned up for other platforms in this research (Almon, 2016). This is likely because RoR adopts a kind of cavalier attitude when it comes to complaints about individuals on the site. Though no doubt sincere in his consumer protection convictions, the site’s eccentric founder and self-described “Ed-itor,” Ed Magedson, can appear callous.
The site’s design implicitly reflects self-awareness of its controversial policies. The Terms of Service (2016) list boilerplate prohibitions on things like disseminating viruses and copyrighted or “illegal” material. Yet the homepage clearly encourages vindictive contributions, displaying the slogan “don’t let them get away with it.” While visitors can browse categories of complaints (e.g. “Automotive” or “Community”) on a separate page, the homepage is dominated by pitches for its “corporate advocacy” program and by text explanations of its philosophy and guidelines for responding to complaints (Ripoff Report, 2016). Such design appears unorthodox compared to a site like Yelp, which features little more than a prominently displayed search box and a smattering of sample reviews for “hot new businesses” (Yelp, 2016).
The site’s policies evince a distinct philosophy about digital age dispute resolution. Magedson has described what he sees as the fundamental character of the online speech environment: “we live in information age and we will all be blogged somewhere, eventually[. . .]good or bad . . . [r]ight or wrong . . . [w]e will all be blogged . . .” (“Editor’s Comment,” 2016). Obscurity is portrayed here as an antiquated remedy when “we will all be blogged somewhere.” The assumption is that there is indeed value in posting these complaints publicly; the onus is on the subject of the complaints to respond appropriately.
The crucial tool for those who feel aggrieved by RoR posts is therefore not, say, a review filter like on Yelp. It is simply the fact that “[a]t least here on Rip-off Report the subject of a report can respond” (“Editor’s Comment,” 2016). This is of course unremarkable in itself. By its own description, RoR is different because its policies ensure that posts reflect a complete record: “[u]nlike other sites which accept payoffs and bribes to remove complaints, Ripoff Report has never and will never do so” (“Did You Know?” 2016). The goal is that “ALL [sic] complaints remain public and unedited in order to create a working history on the company or individual in question” (“Did You Know?” 2016).
Furthermore, the site claims to provide options that are in fact superior to litigation. The “VIP Arbitration” program supposedly represents an attempt to meet “overwhelming demand for a cheaper, faster, and easier alternative to litigation” (“About Us: Want to Sue Ripoff Report?” 2016). As described, the program mimics the parameters of defamation law in distinguishing actionable false facts from non-actionable opinions:
This program allows anyone to dispute the accuracy of any facts [sic] in a report (because there’s no such thing as a false opinion). (“About Us: Want to Sue Ripoff Report?” 2016)
The program further mimics the legal system because “[a]s would be true in court, the complaining party has the burden of proving that a statement is false” (“About Us: Want to Sue Ripoff Report?” 2016).
When the subject of a report initiates the process, the matter is forwarded to a hired professional arbitrator. The arbitrator renders a decision about whether the statements in the review are justified after hearing evidence from both the poster of the review (if he or she can be reached) and the subject. If the arbitrator indeed finds statements in the report unjustified, the site displays the following modification:
Notice of Arbitrator Decision: A neutral and independent arbitrator has determined that the following Report contained one or more false statements of fact. The false statements have been redacted. (“Complaint 626838,” 2016)
Such an outcome thus might indeed provide clarity once a reader navigates to the actual RoR posting from search results, but the link itself will always remain even when the arbitrator has found that the posting contains inaccurate statements.
The implicit contention of such a policy is audacious: RoR is the ultimate arbiter of disputes over speech on its platform—and this configuration is in fact superior to the court system it claims to supplant.
One recent case reinforces how these content policies (and the Section 230 regime that enables them) are defended through a kind of extreme application of marketplace logic. Giordano v. Romeo was precipitated by RoR postings alleging that John Giordano (proprietor of an addiction treatment facility) was a “convicted felon” (“Giordano v. Romeo: Summary,” 2016). Documents in the case indicate that Romeo (the poster) agreed to have the court enjoin her to remove the postings on RoR. When RoR predictably refused, the court enjoined it directly on the theory that “because of the refusal to remove, Xcentric [RoR] became ‘the publisher of the statements’,” thus forfeiting its Section 230 defense (“Giordano v. Romeo: Summary,” 2016). The Third District Court of Appeal in Florida eventually rejected this reasoning, calling RoR’s business practices “appalling” (and the site itself “a forum for defamation”) but not tantamount to publication of the third-party statements (“Giordano v. Romeo: Summary,” 2016).
The reception of the case in the tech policy press indicates how those who defend this application of Section 230 are essentially advancing a digital age intensification of the marketplace theory. (While they defend the policy status quo, it is important to note that those quoted here have all expressed distaste for RoR’s business model itself.) Attorney Paul Alan Levy and other commentators treated the appellate ruling as a faithful, if unpalatable, application of Section 230.
In a way, this is a radical proposition: the architecture of the platform means that counterspeech remains the only available remedy, whatever a court may conclude about the underlying statements.
Popular Opposition to RoR
The most visible campaign that casts RoR as an emblem of the shortsightedness of Section 230 is spearheaded by two activists, Janice Duffy and Michael Roberts, who reside in Australia but focus much of their attention on American policy issues. Roberts runs a reputation management and digital investigation company called Rexxfield; Duffy is a former health researcher who claims that she has been kept out of work by allegations on RoR that “ruined her life” (Wells, 2012). Their efforts appear to resonate with a segment of popular opinion regarding the site and Section 230. The Facebook page for “Join the Ripoff Report Revolt” (2016) has inspired a respectable following, collecting over 1,500 “likes” as of March 2016. Other pages have attracted a scattered following as well. One of Roberts’ petitions on the popular activism site Change.org, for instance, garnered 158 signatures for its call to “Boycott [Google] if they continue to partner with RipoffReport.com and Ed Magedson” (“Boycott,” 2014). Another imploring Google to “de-inde[x] Ripoff Report from Google’s search engine” received nearly 1,500 signatures (“Fight online bullying,” 2015).
Some of their work focuses on convincing advertisers on RoR to sever ties with the website because of the kind of content that it hosts. As Duffy wrote on the homepage of her personal blog,
[i]n 2013 advertisers were alerted to the proliferation of vile and abusive content . . . on Ripoff Report. This was a success! When confronted with their brand next to hate speech and headlines that referred to women as “whores,” “sluts” and “skanks” (and worse) the advertising servers and their clients could not get their business off the website fast enough. (Duffy, 2016)
The link Duffy offers as evidence of this success is in fact a ruling denying RoR a preliminary injunction against the boycott sites; the implied proof, then, is the very fact that RoR filed a lawsuit for tortious interference with business dealings (Xcentric v. Roberts, 2013). The boycott efforts thus at least created the perception of a threat to RoR’s advertising revenue. The pressure on advertisers might have had some direct effect as well. As one post in the Facebook group documents, Toyota Australia was “working toward removing [their] ads” in the fall of 2014 (“Join The Ripoff Report Revolt,” 2014). More fundamentally, the Facebook page alludes to various endeavors to effectively shame those who associate with RoR. One, for instance, lists “enablers” who are verified as participating in RoR’s corporate advocacy program (“Join the Ripoff Report Revolt,” 2015).
While the “Ripoff Report Revolt” Facebook group is nominally focused on RoR itself, other pages in particular seek to persuade Google to change its treatment of the site in search results and to take its job as a kind of conscientious corporate citizen more seriously. Roberts’ Facebook page for his “Bad For People” campaign, for instance, describes how its “primary objective is to force Google to demonstrate the social responsibility clearly implied in the same law [Section 230] that gives them immunity for the republication of defamatory statements and other word crimes” (“About Bad for People,” 2016). Elsewhere, Roberts has advanced his theory of what he calls Google’s “humiliation algorithm.” As he sees it, Google has an incentive to calibrate its algorithm in a manner that highly ranks gripe sites and other high traffic websites promoting controversial (and potentially reputation-damaging) speech because it helps drive Google’s own AdWords revenue (Rexxfield Michael Roberts, 2012a).
While Roberts himself attempts to persuade via the “marketplace” here, this kind of appeal also contains a thesis about the inadequacy of the marketplace framework in itself. If we are going to enable “cheap speech,” the logic goes, then the intermediaries who organize and host that speech must be more sensitive to popular demand for removal of content. Websites can only be sensibly relieved of liability for third-party content if they do so; rebuttal or the remote possibility of arduous legal action against the poster is not good enough. Duffy has herself suggested that such an approach should be uncontroversial: “[t]he ability to make a decision about whether material is likely to cause harm is not rocket science and . . . it only takes a few keystrokes to remove it” (CiviliNation, 2012).
The more common complaint about RoR and Section 230 is thus broader: why fuss over subpoenas and court orders when people clearly feel “dignitary harm” from the postings they seek to have removed? Duffy rejects any compulsion to answer such postings with counterspeech:

I don’t believe that even if someone has had an affair with their best friend’s husband . . . that anyone deserves to be put in global stocks for what is essentially a private issue and [be] globally humiliated . . . I think they have an expectation for their privacy [and] not to be put on display to the world. (Skype Interview, 2015)
Revelation of an affair could fit the criteria for the public disclosure of private facts tort, as such information might well be humiliatingly revelatory and not newsworthy if the subject is obscure (“Publication of Private Facts,” 2016). Yet this kind of comment seems to yearn for a deeper realignment of the policy priorities that govern the speech environment of the web. It essentially asks us to instead start from the presumption that a website enabling people to be “put on display to the world” (regardless of whether the speech would be legally actionable or not) is out of step with the protection from exposure that a well-ordered society owes its citizens. In this sense, it is the normative counterpoint to Magedson’s portrait of the digital age inevitability that “sooner or later we will all be blogged.” Even if subjects of speech can technically respond, the activists’ position is that a well-ordered society should not require them to.
The overall framework they advance, then, is one in which affording a site like RoR (or Google) the latitude to make judgment calls about removal or redaction represents an indignity in itself. RoR and any site hosting such content should face liability if they refuse to remove material (which would probably make them likely to take it down upon request) because this would shift the protective priority back to the subject of the speech rather than the speakers and intermediaries. The marketplace remedies that Duffy and her group pursue are, in this sense, only a symptom of a fundamentally unjust situation in which citizens have to negotiate with private intermediaries to have content removed in the first place.
Much of the material that Roberts has circulated regarding the site focuses on this indignity, alleging that the site is essentially an extortionate scam that receives tacit state approval. As he states in one video, “When someone posts a complaint, it can never be removed, even if the poster requests or demands it. Instead, they are referred to an arbitration program that costs thousands of dollars! That is pure extortion!” (“Introductory Video,” 2013). In this view, Section 230 is unwise not only because of the content on RoR itself but also because the law favors mercenary proprietors like Ed Magedson whose exclusive goal is to fleece innocent citizens when scurrilous things are said about them. Even if the site does not publish the content in a legal sense, it certainly anticipates that such content will be posted and that it thus can sell services to the desperate. Regardless of the legal merits of alleging “extortion,” Roberts implies that the state abdicates some of its duty to protect people by keeping Section 230 on the books. Like the residents of Skokie confronting the NSPA, both the law and powerful UGC platforms force citizens to defend themselves instead of providing refuge.
The bigger problem, therefore, is that allowing the site to essentially profit off of the misfortune of those spoken about creates the perception of a broader kind of political injustice. Specifically, it is as if Section 230 puts the state on the side of the hosts rather than those who feel victimized. One commenter describes the problem in these terms on the Facebook page for one of Roberts’ groups:
Sites like ROR allow anyone to say anything without even checking the credibility—it is a “bitch” site and defamation and libel are it’s [sic] tools to allow ROR to extort money from people who are trying to defend their damaged reputation by the sick minds of the people who put up the blog in the first place. It is simply a criminal site using extortion so . . .
Another aggrieved subject of an RoR posting similarly wondered “how come the FBI does nothing about this RipOff Report and why do they let them terrorize small businesses that are struggling?” (Fred of Nyc, NY., 2016). The overarching theme of these comments is thus not just that RoR is itself objectionable, but that the government is implicitly endorsing it by letting the site refuse to remove content it passively hosts.
Sometimes the RoR activists describe the existing intermediary liability regime using the rhetoric of regulatory capture. When a widely supported bill nullifying “no reviews” clauses in contracts was announced in 2015, for instance, the RoR Revolt page declared that “[w]e need to make sure that these lawmakers don’t just make knee-jerk reactions based on lobbyists for the gripe sites who make millions of dollars from their defamation platforms” (“Join the Ripoff Report Revolt,” 2015). Roberts likewise argues that the safe harbor policy favors tech companies over the needs of ordinary citizens:
[F]aceless giants of silicon valley will turn a blind eye . . . they don’t take [seriously] the civil responsibility implied in the same good samaritan heading where they have the ability to remove offensive material without any fear of getting sued by the speaker or the author of that material. (Rexxfield Michael Roberts, 2012b)
The safe harbor can only exist in this formulation if websites are dutiful in “remov[ing] offensive material” when it is requested. Even if one has the fortitude to engage in counterspeech, the underlying problem is that the state has essentially ceded authority to “faceless giants of Silicon Valley” who are (naturally) most concerned with their own bottom lines. As one critical post on another consumer affairs website put it, “Ripoff Report doesn’t care about the wrongdoing in the world. They only care about how much money businesses will pay to cover it up” (Carl of Hanover, MA, 2016).
An Argument That Illustrates the Philosophical Divide
A public argument between Duffy herself and some technology bloggers perhaps reveals how the activists extend the “community security” paradigm to the Internet. In the summer of 2015, a blog post questioning the premise of her litigation drew Duffy into a protracted public exchange with its authors and commenters.
Blogger Wendy Cockcroft (2015), for instance, lamented that Duffy was not doing more to take advantage of the very platforms for “cheap speech” on which she had allegedly been harmed:
Funny thing, though, this stuff works both ways. She could be bashing the hell out of them on her own blog, etc. and get them on the back foot, bringing in others who have been mistreated to report their woes.
Duffy’s response indicated that the expectation to “bash the hell out of them” in itself represented an affront to community security. Duffy essentially made recurring demands that the authors change the article or delete individual comments because of both her subjective emotional reaction to them and their alleged reputational harm. As she put it in one part of the exchange, the post was “designed to humiliate her”—ostensibly because it questioned the premise of her lawsuit and because some of the comments for the article referred to RoR postings about Duffy (2015a) herself. She repeatedly demanded that “factual inaccuracies” be changed in the article and asserted that “by refusing to remove comments linking [her] to crimes,” the site had “proven that [it has] no ethics” (Duffy, 2015b).
The exchange is uncomfortable to read, but it is important in illustrating what simply looks like a disconnect between two conceptions of online intermediaries’ role in mediating speech on the web. As Duffy seemed to see it, the burden was on the site to vindicate her claims of harm and discomfort once she voiced concern. The site, however, saw it as Duffy’s burden to establish a suitably concrete factual grievance before it could be expected to do anything; the marketplace of ideas would be undermined if it simply complied with demands to remove content.
For Duffy, it would appear that the ability of a platform to remove content with a few keystrokes entails an ethical obligation to do so when the subject of that content claims harm.
Given the hurdles to legal redress, it is perhaps not surprising that someone who feels hurt or misrepresented by online allegations would respond as Duffy did. In another instance, Duffy sent a barrage of tweets to UK MP Jo Johnson and the Sussex University administration imploring them to “please follow up on humiliation” after a Sussex professor wrote an article that expressed intellectual skepticism about her victory against Google but seems devoid of personal animus (Guadamuz, 2016). Roberts has likewise reacted to seemingly unremarkable scrutiny with allegations of illegal behavior against a journalist (Roberts, 2014).
As Justice Black famously pointed out, stifling speech just because an extreme reaction is anticipated threatens to create a “heckler’s veto” by which a hostile audience can effectively shut down a controversial speaker (Feiner v. New York, 1951). At the same time, reactions such as Duffy’s or Roberts’, demonstrated above, hardly embody the kind of idealized “republican virtue” that putatively justifies forcing citizens to confront vitriol in the public forum. Instead, such participants might simply resent the feeling that they have been left with little recourse but to engage in counter-attacks; we should not be surprised when those counter-attacks turn hyperbolic.
Conclusion
Popular opposition has specifically targeted RoR because it is emblematic—to an extreme degree—of the “marketplace” orientation of Section 230. The site and those defending its more perverse content policies contend that counterspeech is essentially all one would need because of the architecture and conventions of the digital speech environment. The opponents of RoR claim this fails to address what many find fundamentally harmful about the policy: it empowers private actors allegedly acting in bad faith to adopt whatever mercenary approaches they like.
For these opponents, the situation is first problematic in a material sense: it creates a mismatch between traditional legal boundaries and the practical remedies available. Sites can be cavalier about the most sympathetic requests. Even if one wins a defamation judgment or acquires a court order against the actual author of a post, one is still at the mercy of a site like RoR. There is also, therefore, a symbolic register: some opponents perceive the status quo as prioritizing the financial needs of large UGC platforms and companies like Google instead of the ordinary citizens who are discussed on such platforms. When platforms
For defenders of the current configuration, of course, it is the subjective (and one suspects expansive) removal standards demanded by people like Duffy that are the bigger threat. When even fairly benign, substantive criticism can trigger a conflagration over “humiliation,” one can perhaps fathom why a regime that prioritizes counterspeech might be attractive. It might force some to defend themselves in the public forum, but it also could help preserve a wide range of civil criticism made in good faith from being preemptively suppressed by overly cautious intermediaries.
Even if it does not succeed in precipitating legislative revision of Section 230, the popular opposition discussed in this article forces us to confront the fact that the “counterspeech” efforts idealized in the marketplace theory require significant emotional fortitude and attention. Duffy, for instance, has evidently spent years trying to counter the allegations and negotiating with Google and RoR. By her own description, she has been unable to work given the combined toll on her available time, reputation, and mental health (CiviliNation, 2012). Marketplace proponents would argue that this is unfortunate but a price worth paying; the analysis here is intended to demonstrate how such a view has also mobilized opposition that revolves around a distinct “community security” logic.
Furthermore, just as Downs argued of the Skokie conflict, a policy regime that emphasizes counterspeech and engagement in the public forum should be expected to sometimes lead merely to pugnacious confrontation rather than dialogue. Such exchanges are not anathema to the marketplace framework per se. Without more voluntary efforts to help citizens remove material upon request, however, we should only expect popular opposition to the existing framework to grow louder—for better or for worse.
Declaration of Conflicting Interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author(s) received no financial support for the research, authorship, and/or publication of this article.
