Abstract
Published in 2014, the Facebook “emotional contagion” study prompted widespread discussions about the ethics of manipulating social media content. By and large, researchers focused on the lack of corporate institutional review boards and informed consent procedures, missing the crux of what upset people about both the study and Facebook’s underlying practices. This essay examines the reactions that unfolded, arguing the public’s growing discomfort with “big data” fueled the anger. To address these concerns, we need to start imagining a socio-technical approach to ethics that does not differentiate between corporate and research practices.
Introduction
In June 2014, the prestigious Proceedings of the National Academy of Sciences published the results of a study completed two years earlier by a Facebook data scientist named Adam Kramer and two academics – Jamie Guillory and Jeff Hancock (Kramer et al., 2014). In the experiment described in the paper, the authors had altered the content presented in the news feeds of 689,003 people during one week to assess whether exposure to emotional content posted by one’s contacts would alter what a person posted. Contradicting public narratives suggesting that streams of positive messages from friends depressed Facebook users, the authors found that being exposed to positive content prompted people to share their own happy news. What followed the release of this paper quickly deviated from the study itself.
Outraged at the idea of researchers manipulating people’s emotions through Facebook, journalists and scholars began critiquing the “emotional contagion” study, focusing heavily on the question of ethics (e.g., Crawford, 2014; Lanier, 2014; Selinger & Hartzog, 2014; Tufekci, 2014; Watts, 2014). One of the driving questions was: should researchers have the right to alter what content people see on commercial sites without ethics oversight or informed consent? While ethicists debated the harms and violations presented by the study, legal scholars wrote public letters to the Federal Trade Commission and the Office for Human Research Protections demanding investigations (see Grimmelmann, 2014).
As social scientists argued over this, many computer scientists and industry practitioners responded with sheer confusion. The practice of A/B testing is commonplace in and essential to the production of algorithmic recommendations, which are the cornerstone of Facebook’s news feed. It’s not just that it’s standard practice; it’s the very foundation upon which these systems are built. Facebook is invested in increasing people’s engagement with the site; the company devotes tremendous computational resources to devising algorithms that increase “stickiness.” To do so, they manipulate the feed every day and are always looking for better ways to manipulate what people see. From this perspective, why is working with scholars and documenting the results of daily business practices so insidious? To complicate matters further, Facebook data scientists had previously published an article documenting how tweaks to the feed could alter people’s voting practices (Bond et al., 2012). What made this study different in the eyes of scholars and the public?
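To make concrete what “standard practice” looks like, consider a minimal sketch of an A/B test over feed ranking. Everything here is hypothetical – the function names (assign_bucket), the experiment name, and the simulated engagement numbers are invented for illustration, not drawn from Facebook’s systems – but the shape is representative: users are deterministically split into conditions, each condition sees a differently ranked feed, and an aggregate engagement metric decides what ships.

```python
import hashlib
import random
from statistics import mean

def assign_bucket(user_id: str, experiment: str) -> str:
    """Deterministically assign a user to 'control' or 'treatment' by
    hashing their id with the experiment name (so assignment is stable)."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return "treatment" if int(digest, 16) % 2 else "control"

# Simulated experiment: each bucket sees a differently ranked feed
# (abstracted away here); we log a fake engagement score per user.
random.seed(0)
logs = {"control": [], "treatment": []}
for i in range(10_000):
    bucket = assign_bucket(f"user{i}", "positive_feed_boost")
    # Stand-in for observed behavior; real systems measure clicks,
    # likes, comments, and dwell time.
    engagement = random.gauss(1.02 if bucket == "treatment" else 1.0, 0.3)
    logs[bucket].append(engagement)

lift = mean(logs["treatment"]) / mean(logs["control"]) - 1
print(f"treatment lift: {lift:+.2%}")  # small lifts drive ship decisions
```

Run at the scale of a major platform, even a fraction-of-a-percent lift in such a metric can justify a permanent product change, which is part of why practitioners saw the “emotional contagion” manipulation as routine.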
As debates raged, with hundreds of scholars and journalists chiming in, I heard two explanations dominate public discourse: 1) People’s emotional well-being is sacred; 2) Research is different from business practice. Both arguments, while reasonable on their face, fail to address the broader ethical questions introduced by Facebook’s algorithms, let alone explain why this particular study sparked so much outrage when other reported practices have not. What’s at stake vis-à-vis this study has less to do with the article itself than with the shifting context in which Facebook’s practices are interpreted. As more people begin to understand the manipulations possible through “big data,” there’s growing discomfort with those at the helm. Lacking a coherent framework with which to talk about these issues, scholars and the public latched onto this study to channel their broader anger.
Market research suggests that the American public trusts Facebook less than either the IRS or the NSA, even after the NSA controversies set off by Edward Snowden’s disclosures (Ekins, 2013). Although the service is widely adopted, people’s relationship to the company is fraught with discomfort. They question the company’s motives even as they engage with the site. Recognizing this broader discomfort is important context for understanding the public rage over the ethical issues presented by the “emotional contagion” study. Because of this, any effort to address the ethical issues introduced by this study requires moving beyond critiques of Facebook’s data science research or the particulars of this study, toward reconsidering how to hold public companies accountable for the decisions they make on behalf of their consumers.
Understanding research at Facebook
Industrial research labs come in many different flavors, but companies invest in basic and applied research in order to advance their mission as a whole. When companies profit from patents, basic researchers are often encouraged to imagine the impossible. When companies are more focused on increasing advertising revenue, researchers are often incentivized to develop techniques that increase attention or product desirability. Unlike their colleagues in product development, most researchers in industrial labs who are welcome to conduct basic research are not told what to study. They have considerable freedom, but they know that they will be rewarded for helping the company. As a result, researchers in these institutions often balance their own scholarly interests against their desire for corporate recognition in deciding which projects to pursue.
Lest academics scoff at this trade-off, it is important to recognize that any scholarship requiring significant resources is shaped by such factors. While applying for grants from the National Science Foundation may seem more neutral than asking the Department of Defense or a company for money, anyone who has navigated the NSF knows that certain topics attract more attention at certain times than others. Academia has its own logic of what research is considered worthy, and scholars regularly navigate their peers, administrations, and funders in their efforts to pursue science.
As is common at relatively young companies, Facebook’s research team is closely tied to, and regulated to a certain degree by, product. Distinct from the market researchers who must focus exclusively on product, Facebook’s basic researchers have significant freedom to pursue and publish research, but they must justify their efforts and get approval to publish. Given this approval process, it seems clear that Facebook’s public relations team never imagined that this study would prompt such a backlash.
From a product perspective, this study is clearly beneficial to the company. Facebook wants its customers to be happy. If the content they accessed made them sad, presumably they wouldn’t return to the site, and that would be costly to the company. Although the effects measured in this study are relatively minor in a psychological sense, they nonetheless suggest that the sentiment of content influences people’s practices. In essence, this study provides a clear product directive for Facebook: weight content with positive sentiment more.
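To see how small a step it is from this finding to a product change, here is a minimal sketch of what such a directive could look like in code. This is an illustration under invented assumptions – feed_score, predicted_engagement, and sentiment_weight are hypothetical names, and real ranking models are vastly more complex – but it shows how “weight positive sentiment more” reduces to tuning a single parameter.

```python
def feed_score(post: dict, sentiment_weight: float = 0.3) -> float:
    """Hypothetical feed score: predicted engagement plus a tunable
    sentiment term. Raising sentiment_weight surfaces happier content."""
    return post["predicted_engagement"] + sentiment_weight * post["sentiment"]

posts = [
    {"id": 1, "predicted_engagement": 0.60, "sentiment": -0.8},  # sadder post
    {"id": 2, "predicted_engagement": 0.55, "sentiment": 0.9},   # happier post
]
ranked = sorted(posts, key=feed_score, reverse=True)
print([p["id"] for p in ranked])  # -> [2, 1]: the happier post now ranks first
```

Nudging sentiment_weight up from zero quietly reorders every user’s feed toward happier content; few inside a company would describe that tuning decision as an experiment requiring consent.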
At Facebook, research and practice are connected and, yet, the conversation that followed the release of this study focused almost exclusively on what is ethical in “research.” The narrative implied that research should be more “high brow” than product development and thus held to a higher standard. It also presumed that it is easy to circumscribe research and to differentiate what is conducted under the banner of science from what is intended to advance the product. This boundary-making flies in the face of any commitment to an ethical society.
One dangerous presumption is that Facebook would be inherently more ethical if its researchers were required to go through an IRB. Outside of clinical trials, which are uniquely regulated by the Food and Drug Administration, research that is not federally funded – including that which takes place at most corporate research labs – is not required to go through an institutional review board. This does not mean that corporate research is fundamentally unethical or, for that matter, that research that has IRB clearance is. IRBs are a valuable but imperfect mechanism of accountability that allows universities to distribute responsibility while still guaranteeing they have implemented a process for oversight. Their processes differ by university, and many ethically thoughtful researchers have found their studies stymied by IRBs hell-bent on minimizing university liability. While legal scholars have argued for consumer subject review boards in industrial research (Calo, 2013), ethicists have long questioned the ethical value of IRBs at all (e.g., Schrag, 2011), revealing that adding an IRB to corporate research is not an inherent solution. While oversight mechanisms may help identify egregious violations, many controversial studies – like the “emotional contagion” study – would most likely pass an IRB examination. Companies are regulated, most notably by the Federal Trade Commission, and the costs of violating those standards are far greater than the consequences academics face when in violation of IRB rules. Finally, and most importantly, the public does not automatically benefit when oversight simply becomes a process that researchers seek to avoid, as is often the case with contemporary IRBs. Ethics needs to be embedded into everyday practices, not outsourced to procedural review.
Ethics aren’t a checklist. Nor are they universal. Navigating ethics involves a process of working through the benefits and costs of a research act and making a conscientious decision about how to move forward. Reasonable people – even experts – differ on what they think is ethical. And disciplines have different standards for how to navigate ethics. Rather than trying to transplant university processes into industry, we need to train researchers and practitioners to systematically think about how their decisions alter the world in ways that benefit and harm people. We need ethics to be not just tacked on, but an integral part of how everyone thinks about what they study, build, and do.
Connected to the valorization of IRB procedures is an assumption that “informed consent” is the solution. In this study, researchers did not seek informed consent, just as product designers do not seek informed consent whenever they manipulate what is shown to users. This raises numerous red flags, but informed consent is not an ethical panacea. It’s far too easy to say “informed consent” and then not take responsibility for the costs of the research process, just as it’s far too easy to point to an IRB as proof of ethical thought. For any study that involves manipulation – common in economics, psychology, and other social science disciplines – people are only so informed about what they’re getting themselves into. You may think that you know what you’re consenting to, but do you?
Companies like Facebook know all too well how little people pay attention to efforts to inform them through text. To comply with legal requirements, Facebook provides an extensive “terms of service” document and asks users to click to consent. No one reads these. Average users don’t read the privacy notices or virtually any of the fine print that Facebook serves them. Critics complain that this is because Facebook’s information is poorly designed, too verbose, or filled with too much legalese. And yet, efforts to make such information more accessible have consistently failed, whether implemented by researchers, designers, or companies themselves. What makes “informed consent” any different? In my own research practice, I’ve asked hundreds of people to fill out consent and assent forms that have been approved by IRBs; almost none of my informants have ever read the first paragraph or asked any questions when I explained the methodology. So, when it comes to ethical commitments, are researchers looking for informants to be informed, or are we looking for them to perform the act of appearing informed?
I’m not necessarily saying that Facebook made the right trade-offs with this study, but I think that the scholarly insistence that research is only acceptable with an IRB plus informed consent is disingenuous. And I think that we do ethics a disservice when we presume that the standards in academia are truly best practices.
Algorithmic manipulation of attention and emotions
There is no doubt that messing with people’s emotions can have significant negative consequences for those struggling with mental health issues or contemplating suicide, although the effects of media on behavior are controversial among scholars. Still, the potential for harm is what prompts researchers to adopt protective procedures – informed consent, post-experiment transparency about the manipulation, and so on. But what about the potential harm of negative content on Facebook more generally? Even if we believe that there were subtle negative costs to those who received the treatment during the one-week period, this manipulation is not a stand-alone research act. Facebook algorithmically determines which content to offer to people every day. If we believe the results of this study, the ongoing psychological costs of negative content on Facebook every week prior to that one-week experiment must have been greater. How, then, do we account for the positive benefits to users if Facebook increased positive treatments en masse as a result of this study? Shouldn’t the benefit to all users of learning this information outweigh the small cost to a subset of users in a weeklong study?
I don’t want to suggest that the only public good Facebook should aim to produce is user happiness. If we take the implementation of this study to its logical conclusion, what would it mean if you were more likely to see announcements from your friends when they are celebrating a new child or a fun night on the town, but less likely to see their posts when they’re offering depressive missives or angsting over a relationship in shambles? If Alice is happier when she is oblivious to Bob’s pain because Facebook chooses to keep that from her, are we willing to sacrifice Bob’s need for support and validation? This is the hard ethical choice at the crux of any decision about what content to show. And the reality is that Facebook is making these choices every day without oversight, transparency, or informed consent, regardless of what it chooses to publish. This type of practice produces what Eli Pariser (2011) has called “the filter bubble,” and such prescribed environments come with their own social costs.
Part of the outrage around this experiment stems from the fact that many Facebook users do not realize that Facebook actively alters what content users see based on what is more likely to be consumed, clicked on, “liked,” commented on, or otherwise engaged with. Users see a mere fraction of the content that their network produces, in part because Facebook has learned that an onslaught of content is overwhelming to users. Because they are building a product with delight and “stickiness” at the core, they actively curate content through personalization algorithms designed to give users what Facebook thinks they most want. Users may dislike this practice, and companies like Twitter have chosen not to obscure posts out of a competitive interest in differentiation, but algorithmic manipulation of the news feed is at the center of Facebook’s product. What makes these practices difficult to discuss is that the algorithms that determine what users see aren’t public, and the fact that Facebook makes decisions on users’ behalf is not widely advertised. Facebook has opted for a paternalistic approach over giving users control. They are not the only company to do so; Apple’s founder Steve Jobs was notorious for such paternalism during his second reign.
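The curation described above can be sketched in a few lines. This is a toy model, not Facebook’s algorithm – predict_engagement, the affinity signal, and the weights are all invented – but it captures the structural point: a personalized score is computed for every candidate post, and only the top handful survive.

```python
from typing import Dict, List

def predict_engagement(user: Dict, post: Dict) -> float:
    """Toy stand-in for a learned model: weight signals such as the user's
    past interactions with the author and the post's overall popularity."""
    affinity = user["affinity"].get(post["author"], 0.0)
    return 0.7 * affinity + 0.3 * post["popularity"]

def curate_feed(user: Dict, candidates: List[Dict], k: int = 3) -> List[Dict]:
    """Keep only the k highest-scoring posts; everything else is dropped."""
    return sorted(candidates, key=lambda p: predict_engagement(user, p),
                  reverse=True)[:k]

user = {"affinity": {"alice": 0.9, "bob": 0.1}}
candidates = [
    {"author": "alice", "popularity": 0.2},
    {"author": "bob", "popularity": 0.8},
    {"author": "carol", "popularity": 0.5},
    {"author": "dave", "popularity": 0.1},
]
for post in curate_feed(user, candidates, k=2):
    print(post["author"])  # only two of the four posts ever reach the user
```

Everything below the cutoff is silently discarded, which is precisely the kind of decision-making on users’ behalf that is not widely advertised.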
Facebook is not alone in algorithmically predicting what content you wish to see. Any recommendation system or curatorial service prioritizes some content over other content. Most sites, from major news media to social media, have some algorithm that shows you the content that people click on the most. This is what drives media entities to produce listicles, flashy headlines, and car-crash news stories. Long before the internet, major media concluded that fear and salacious gossip sell papers. Stories about child abductions, dangerous islands filled with snakes, and celebrity sex tape scandals are often the most clicked on, retweeted, and favorited. Thus, an entire industry has emerged to produce clickbait content under the banner of “news.” Increasingly, companies are turning to research produced in psychology, communication, and neuroscience to imagine new techniques for manipulating people’s attention to and attitudes toward their products.
As Tarleton Gillespie (2014) points out, consumers expect different things from news media curators and social media websites. While the public has come to understand that editors decide what goes on the front page of a newspaper or at the top of the hour in a TV news show, they expect social media providers to connect them with their friends in the same way that they expect telephone companies to patch through their calls. Just as they don’t expect AT&T to decide that they’ve been speaking to one friend too much and refuse to make the connection, they don’t think of Facebook as having any right to determine whose content they should see once they’ve opted to signal the other as a “friend.”
Given the disconnect between what people assume Facebook does and what it actually does, what does accountability look like? Part of what makes this study relatively unusual is that it is one of the few situations in which Facebook has publicly identified a way in which it manipulates the news feed beyond the assumed marketing-oriented decision-making. What the reaction to this study perversely suggests is that the practice of algorithmic manipulation is acceptable, but being transparent about that process is not.
Ethics, power, and accountability
Ethics are contextually situated, shaped by a society’s understanding of morality and social conduct and rooted in the norms, practices, and cultural logic of any community. Concerns over ethics often raise broader questions about how society is constructed and the decisions being made by those of high status and power.
The Facebook “emotional contagion” study raises many questions for researchers about methodology, findings, oversight, and the relationship between industry and research, but what’s really at stake in the public outrage around the study has little to do with the study itself. There is growing negative sentiment towards Facebook and other companies that collect and use data about people. This study provided ammunition for people’s anger about “big data” precisely because it’s so hard to talk about harm in the abstract.
For better or worse, many people want to believe in a Facebook provided by a benevolent dictator to enable people to better connect with others. This was the promise of the platform when people started joining en masse. But, like most social media sites, Facebook was produced by a company that needed to find a way to sustain itself, its employees, and its investors and, more importantly, grow. As early internet entrepreneur Ethan Zuckerman (2014) has argued, advertising is the internet’s “original sin.” In order to feed its growth, Facebook began using its treasure trove of user data to compete for advertising dollars. As a public company that needs to increase revenue each quarter, Facebook must find ever more effective and efficient ways of profiting from user data.
People have an abstract notion of how advertising operates, but they don’t really know, or even want to know. They just want the hot dog to taste good. Whether it’s couched as research or operations, people don’t want to think that they’re being manipulated. So when they find out what soylent green is made of, they’re outraged. This study isn’t really what’s at stake. What’s at stake is the underlying dynamic of how Facebook and other major social media sites run their businesses, operate their systems, and make decisions that have nothing to do with how their users want those companies to operate. Fundamentally, as with all ethical conundrums, it’s a question of power.
Media and information companies have the ability to determine what people see. While journalists have long discussed the power of coverage, and debates over “net neutrality” reveal the political nature of controlling the transom, research into “media effects” and algorithmic politics is in its earliest days and filled with significant methodological and conceptual gaps. Given the reality of our capitalistic society, companies are haphazardly experimenting with what is possible, driven by market-based competition, fuzzy interpretations of science, and a desire to increase profit. Information companies aren’t regulated in the same way as the pharmaceutical industry. Social media companies don’t run clinical trials before they put a product on the market. They can attempt to manipulate their users all they want without public oversight or methodological rigor. And as the public, we can only guess at what the black box is doing.
There’s a lot that needs to be reformed here. Scholars often begin by questioning the very foundation of contemporary capitalism, but I think that there are more pragmatic conversations to be had without seeking to dismantle the entire system. We need to figure out how to have a meaningful conversation about corporate ethics, regardless of whether the activity is couched as research or not. It is not fruitful to claim that companies are inherently unethical because they subscribe to capitalism. Nor is it so simple as suggesting that the lack of a corporate IRB or of gold-standard “informed consent” means that a practice is unethical. Almost all manipulations by these companies occur without either of these – and without public knowledge – even though company employees regularly debate ethics in their decision-making processes. Engineers aren’t ethically clueless automatons and companies don’t ignore ethics in the pursuit of profit; ethical issues have played a significant role in every product development project I’ve experienced or observed. Ethical oversight isn’t easy, but it also can’t be formulaic if it is to evolve with companies as they change. We need a socio-technical model of ethical oversight that creates a conversation between those doing the manipulation and those producing the data being manipulated. Such a model must be co-constructed by companies and researchers, not simply imposed by outsiders who think they understand corporate decision-making. More challengingly, there is no one-size-fits-all solution: size, industry, and organizational structure all shape how ethics can be infused into the process.
I’m glad that this study has prompted an intense debate among scholars and the public, but I fear that it’s turned into a simplistic attack on Facebook over this particular study rather than a nuanced debate over how we create meaningful ethical oversight. The lines between research and practice are always blurred and information companies like Facebook make this increasingly salient. Any effort to regulate a company’s practices by creating artificial distinctions between research and business will not only fail to provide meaningful oversight, but will undermine the true goals of increasing ethical practices.
No one benefits by drawing lines in the sand. We need to address the problem more holistically, and this won’t be easy. In the meantime, the public does need to hold companies accountable for how they manipulate people across the board, regardless of whether or not it’s couched as research. For such efforts at public accountability to be effective, we must collectively recognize the realities of product development rather than basing our conversations on abstract assumptions about evil corporations. If we focus too much on this study or on Facebook, we’ll lose track of the broader issues at stake.
Funding
This research received no specific grant from any funding agency in the public, commercial, or not-for-profit sectors.
