Abstract
Social media platforms don’t just guide, distort, and facilitate social activity; they also delete some of it. They don’t just link users together; they also suspend them. They don’t just circulate our images and posts; they also algorithmically promote some over others. Platforms pick and choose.
Platforms matter. This is now, I think, becoming an established observation in social media research. For some reason, it remains tempting to study social dynamics on platforms while ignoring the platforms themselves, treating them as simply there, irrelevant, or designed in the only way imaginable. But recent work on the socio-technical, context-specific, and political-economic realities of social media has made clear that platforms, in their technical design, economic imperatives, regulatory frameworks, and public character, have distinct consequences for what users are able to do, and in fact do.
So platforms matter . . . but that is not the end of the story. Even the best of this work, even in its richest understanding of the technical, economic, and political contours of social media platforms, tends to overlook a crucial additional element. Social media platforms don’t just guide, distort, and facilitate social activity; they also delete some of it. They don’t just link users together; they also suspend them. They don’t just circulate our images and posts; they also algorithmically promote some over others. Platforms pick and choose.
This is, of course, something we “know” already. Of course social media platforms police their content: I remember some kerfuffle about Apple removing sexy apps a few years ago. Of course Twitter suspends users: Aren’t those misogynist trolls terrible? Of course YouTube algorithmically promotes some of its content: That is why its front page looks the way it does. It seems that, on about a yearly basis, the tech press nails Facebook for deleting a photo that appeared to include an exposed boob.
However, this familiarity obscures some important issues. First, many users do not know all that much about the deliberate interventions platforms make. That is not to say that users are dupes; most of us “savvy users” don’t understand these processes as well as we should either. What I mean is that most users don’t encounter the rules imposed by platforms: most have little reason to read them, and most never have anything deleted. And though users may be aware that there are algorithms inside their favorite social networking site or search engine, most know very little about how they work. Were this a real part of our conception of and discourse about these platforms, we might approach them differently, expect different things from them, legislate them differently, and study them differently.
Furthermore, platforms regularly downplay these interventions, except in specific moments when it is beneficial for them to trumpet them. When Instagram and Pinterest were accused of hosting pro-anorexia images, they loudly announced their new policies against such content. (Whether these policies were effective was a much quieter discussion.) When advertising partners want assurances that their posts will be seen, the platforms show off how sponsored posts are designed to persist longer than regular ones. But beyond that, these companies prefer to emphasize their wide-open field of content and their impartial handling of it.
This constant intervention is an important and under-examined part of what platforms do. We study the topics of discussion and dynamics of sociality that flourish online, but we don’t as often study the topics and dynamics that are asked to leave, or that never show up because their authors know they will be deemed unacceptable. We study what content these platforms circulate, but we too often describe it as what “returns” as search results or “goes viral,” rather than seeing these results as the work of strategic actors selecting and assembling user content into a particular composite. This may be a gentler intervention than a newspaper editor deciding what is a front-page story and what isn’t worth reporting at all, but it is selection nonetheless, and it matters in many of the same ways.
Of course, the user suspended from a platform has not been silenced entirely, which means that it is hard to call this censorship in the strict sense. The web beyond these platforms still offers a more loosely regulated home for controversial content. But questions do arise when some content is forbidden from appearing where people expect it to be, in the massive online spaces where audiences can be built. This is why the common admonition “if you don’t like it here, just leave” is insufficient when it comes to culturally and politically contentious speech. While it is not unreasonable for a platform to want to set rules and install algorithmic mechanisms for highlighting content for its users, things change as these platforms grow. Scale and centrality make a difference; once a platform becomes massive, new kinds of expectations emerge, and new kinds of obligations arise. But we will never identify what these obligations are, or should be, until we recognize that there is selection and deletion going on, all the time.
This also has implications for how we conduct social media research. A savvy researcher will take care when, for instance, making claims about contemporary political discourse by collecting all the tweets that used one particular hashtag. Of course, the researcher is already excluding private tweets, as well as other relevant discussions that did not coalesce around that one hashtag. Good methodological caveats. But Twitter also deletes tweets and suspends users. Some things may not have been said at all by users anticipating those prohibitions. Other tweets deemed popular were displayed in a larger font, or added to Twitter’s email prompts sent out to some users; the hashtag term might have trended, at some point and only in some places. How will these interventions be accounted for, as absent elements or relevant dynamics in the corpus of data?
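One partial remedy, at least for removals that happen after collection, is to re-query the archived items later and report what the platform no longer serves. What follows is a minimal sketch in Python, assuming a hypothetical `rehydrate` lookup; the names and interface are illustrative, not any platform’s actual API, and the audit cannot recover what was deleted before collection or was never posted at all.

```python
# A minimal sketch of auditing an archived hashtag corpus for later platform
# interventions. `rehydrate` is a hypothetical stand-in for whatever lookup
# the researcher has access to; it is not Twitter's actual API.

from dataclasses import dataclass
from typing import Callable, Iterable, Tuple

@dataclass
class AuditReport:
    total: int = 0
    still_visible: int = 0
    removed: int = 0            # tweet deleted or withheld since collection
    author_suspended: int = 0   # account no longer available

def audit_corpus(
    corpus: Iterable[Tuple[str, str]],
    rehydrate: Callable[[str], str],
) -> AuditReport:
    """Compare an archived corpus against what the platform still serves.

    corpus: (tweet_id, author_id) pairs saved at collection time.
    rehydrate(tweet_id): returns "ok", "removed", or "suspended"
    (a hypothetical interface, supplied by the researcher).
    """
    report = AuditReport()
    for tweet_id, _author_id in corpus:
        report.total += 1
        status = rehydrate(tweet_id)
        if status == "ok":
            report.still_visible += 1
        elif status == "suspended":
            report.author_suspended += 1
        else:
            report.removed += 1
    return report
```

Even then, such an audit only documents visible removals; anticipatory self-censorship and algorithmic promotion leave no trace in the corpus and can only be acknowledged as unmeasured caveats.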
I don’t mean to be finicky. Or maybe I do. We might think that at the scale of “big data,” these perturbations are small enough to be ignored. After all, plenty of tweets drop out of the data in other ways, through sampling, choice, error, and so forth. But let us be concerned, anyway, about the fact that this corpus is not just the product of people’s participation but has also been crafted by the platform, according to the logic of its algorithms, the imperative of its business model, and the enforcement of its community guidelines.
Recognizing that social media platforms shape the social dynamics that depend on them allows us to draw connections between the design (technical, economic, and political) of platforms and the contours of the public discourse they host. Remembering that they are private businesses reminds us that some of their decisions will be craven, or financially motivated, or constrained in ways even they cannot recognize. But we have not done justice to the fact that, like newspaper editors and network broadcasters (and, in important ways, unlike them), social media platforms pick and choose: based on explicit and implicit norms and cultural presumptions about taste and etiquette, at the behest of offended users or concerned lawmakers, and in ways that best suit their economic aims. If we tried on this idea, even if it is overstated, we might shed the compelling myth that these are information flows that happen to be filtered, and instead see our information as only raw material from which platforms assemble an information product for us: a feed for which some content is chosen, some is given prominence, some is discarded, and some is expelled. That is to say, platforms intervene, and the public culture that emerges from them is, in important ways, the outcome.
Declaration of Conflicting Interests
The author declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author received no financial support for the research, authorship, and/or publication of this article.
