Abstract
Discoverability is a concept of growing use in digital cultural policy, but it lacks a clear and comprehensive definition. Typically, discoverability is narrowly defined as the problem content creators face in finding an audience amid an abundance of choice. This view misses the important ways that apps, online stores, streaming services, and other platforms coordinate the experiences of content discovery. In this article, we propose an analytical framework for studying the dynamic and personalized processes of content discovery on platforms. Discoverability is a kind of media power constituted by content discovery platforms that coordinate users, content creators, and software to make content more or less engaging. Our framework highlights three dimensions of this process: the design and management of choice in platform interfaces (surrounds), the pathways users take to find content and the effects those choices have (vectors), and the experiences these elements produce. Attention to these elements, we argue, can help researchers grapple with the challenging mutability and individualization of experience on content discovery platforms as well as provide a productive new way to consider content discovery as a matter of platform governance.
On January 8, 2016, rapper Kanye West released the song “Real Friends” on the streaming music platform SoundCloud. A new track from one of the most famous musicians in the world, “Real Friends” immediately attracted a massive audience—a fact that ended up having a profound effect on someone totally uninvolved with the song. Once West’s track ended, those users with SoundCloud’s “Related Tracks” autoplay feature enabled immediately heard another song, “Lowkey,” by Kansas City teenager Rory Fresco, which had been listened to fewer than 5,000 times at that point. A day later, “Lowkey” had racked up 168,000 plays; today it has more than 6 million and Fresco is signed to major label Epic Records and working with legendary producer Timbaland (Randle, 2016). Fresco’s luck would have been different had his song appeared just a few months later, when SoundCloud changed its algorithm to suggest tracks based on users’ individual listening habits rather than finding a “related” song for each uploaded track and playing it to all users (Bortignon, 2016).
SoundCloud playing “Lowkey” to Kanye fans highlights a key issue for contemporary cultural production: how users and producers navigate the matrix of businesses, apps, and networks that mediate engagement with cultural goods. The rise of streaming as service, the move to the cloud, and overall media convergence have concentrated online content discovery within a few major companies, generally called platforms (Holt & Sanson, 2014; Nieborg & Poell, 2018). Leading cultural institutions have begun to address these changes in terms of their impact on the “discoverability” 1 of cultural products. According to the Canada Media Fund, discoverability is a problem for cultural producers competing in the online attention economy (Desjardins, 2016). Indeed, discoverability is a factor for all online “content,” an admittedly broad heuristic term for nearly everything on the Internet. 2
At a moment when the meaning of discoverability is in flux, we distinguish it from mere marketing and elaborate its use as a concept in media studies. We argue that discoverability concerns how platforms coordinate users, content creators, and software to make content more or less engaging.
Medium and Flow: From Remote Controls to Recommender Systems
Our approach to discoverability builds on the concept of flow developed by Raymond Williams (1974) in his influential studies of television. Williams introduced the concept of flow to analyze media as a cultural form. For Williams, television channels, like their predecessors in radio, created sequences of programs and advertisements within a fixed schedule. Unlike a book, which can be picked up and read at any time, or a library, where readers can find books more or less whenever they choose, access to television programs was programmed and scheduled by networks. This planned flow resulted from meaningful decisions about what should be on the air and when, and the experience of watching television was arranged around this fixed programming. Viewers had only limited influence over the program; they could choose to watch or not at certain times or they could change the channel.
While the catalogs of streaming services like Netflix or Spotify might suggest a return to the relatively stable freedom of choice offered by libraries, we argue that, because many platforms actively attempt to guide how and when users discover content, Williams’s concept of flow is still relevant. Ramon Lobato (2017) compares the schedule of a television broadcaster with the catalog of a streaming video platform. He notes, “the two objects are comparable in the sense that both index the range of content available through a particular distribution system, and thus delimit—without determining—the likely range of textual experiences available to audiences through that system” (Lobato, 2017, p. 3). Building on Lobato’s distinction, we argue that platforms coordinate what Jane Bennett (2010) calls distributive agency. Humans interact with algorithms, artificial intelligence (AI), and bots through interfaces, apps, screens, and other social media to navigate individualized content flows. This coordination shapes engagement with content pulled from a vast but not limitless digital archive of television shows and films, posts and status updates, and the hearts and souls of artists trying to succeed online. In other words, unlike a linear television channel, online platforms do not limit audiences to a binary choice of consuming or not. Instead, these platforms provide a dynamically unfolding, personalized architecture of choice (Yeung, 2016)—importantly, one whose orientation toward profit is distinct from the public service mission of a library—within which users discover content.
Platforms differ in their coordination of content discovery. Each provides a unique, albeit related, way to discover content that depends on the design of interfaces and algorithmically generated components, such as recommendations. To understand these differences, we elaborate three concepts—surrounds, vectors, and experiences—to examine how platforms coordinate discoverability. Surrounds refer to the ways that platforms arrange choices on or between screens. Vectors refer to the interactive pathways we take through data, guided by software. Taken together, surrounds and vectors shape the contours of how users find and consume pieces of content, which influences their experiences of content discovery.
How discoverability is coordinated matters for both users and creators. From the user side, discoverability refers to the varying ways individuals consume content and the value of these different experiences. The attention economy is one way to monetize the experiences of using a platform (along with data mining and paying for access). The outcomes of discoverability—the content audiences deem worthy of paying attention to—also provide another way to understand the value of platforms to their users. For content creators, discoverability affects how they find an audience for their apps, games, music, or movies in the Apple and Google stores; books and other cultural commodities on Amazon; films and television shows on Netflix; video games on Steam and Twitch; music on Spotify; videos on YouTube; or one of the varieties of content on social media platforms like Facebook, Instagram, and Snapchat.
Surrounds: The Environments of Discovery
We borrow the concept of the surround from Fred Turner, following Hye Jin Lee and Mark Andrejevic (2014). Turner introduces surrounds to describe the Museum of Modern Art’s 1955 exhibit The Family of Man, an environment of images arranged so that visitors could chart their own paths through it, a carefully managed experience of free choice.
Just as Turner analyzed the exhibit’s floor plans to understand its controlled presentation of choice, attending to surrounds entails studying the economy of screen space. What elements take up the most space or are otherwise emphasized? How are choices positioned in relation to each other? For example, Richard Grusin (2000) in a study of “screen real estate” argues that part of Microsoft’s monopolistic control over computing extended to the desktop and what icons it included by default. These questions of interface extend from the desktop to a platform’s individualized home page, which is a mosaic of content options, buttons, and search bars that functions like a surround with multiple, curated choices. Journalism studies has recognized the importance of news websites’ home pages, comparing the selection of what appears on them with editorial decisions about the print version’s vitally important front page. One can say that, heuristically, the location of an item on the home page signifies its importance. Typically, items to the left and higher up have more importance than items on the right and lower down (A. M. Lee, Lewis, & Powers, 2014; Zamith, 2016). Advertisers also recognize the value of the home screen, calling it the “first impression unit” because the moment a page loads is a prime opportunity to introduce or advertise content.
Surrounds extend beyond one screen. They are configurations of first and second screens, apps, desktops and dashboards, and, increasingly, digital assistants like Amazon Echo or Google Home. Our concept of the surround extends the work of Kathleen Oswald and Jeremy Packer (2012), who broadly define flow in the wake of new media as the “material set of practices, techniques and technologies that integrate individuals into the temporal and spatial dynamics of contemporary economics and cultural expectations” (p. 277). They propose a flow 2.0 in which “the media environment is no longer devoted to keeping viewers fixed on one transmission” but to integrating them into an environment of constant, multiscreen connectivity.
Those constraints are dynamic. Surrounds can adapt to an individual user’s device, browser settings, and window size, and many reconfigure in response to the results of continual, collective testing of different design and content choices. As a result, our screens are the fluid products of responsive layouts and optimization software recalculating ratios of screen real estate and priority.
Netflix provides an exemplary surround. No doubt it will have changed after we’ve written this description, but in late June 2017, two thirds of Netflix’s home page as seen on a personal computer was used to promote its own shows (at least on our screens). Where some platforms might sell access to this prime screen real estate, Netflix seems to use this space principally to promote its own content. In the bottom third, users will usually find a row of tiles for content they have manually added to My List. As users scroll down past the initial screen, they will see another row advertising Netflix’s original content with tiles three times the size of those in the My List row.
Only relatively far down the page does Netflix begin to “recommend content,” and then only content that’s “trending” or has been recently added. Much further down, Netflix begins to suggest content based on what a user has previously watched. Attention to this configuration provides insight into the logics behind content discovery on the platform, while changes in the surround also reflect changes to Netflix’s business model. As much as Netflix’s corporate discourse talks about using big data to recommend the perfect content to its subscribers, the prominence of its own original content in viewers’ recommendations suggests a routine model of vertical integration (Hallinan & Striphas, 2016). Furthermore, as Hallinan and Striphas note, Netflix’s significant financial investment in original content was initially predicated on its unique capacity to leverage user data to guide content development rather than recommendation (in that sense, Netflix’s original content comes pre-recommended).
Further inquiry into surrounds might build on research in journalism studies examining the composition of front pages (Zamith, 2016). Where Zamith questions the relationship between audience metrics and editorial decisions, a survey of user interfaces across platforms might reveal how different screens (e.g., phones or tablets) create different economies of space. Items on the front page could also be coded by their content type and relative dimensions to describe the ratios of attention. In the rough example of Netflix above, its own content occupied three times as much screen real estate as user-selected content. Does that pattern hold true for other platforms? Cross-platform content analysis with multiple participants might offer a better understanding of the nature and design of flow today (cf. Feuz, Fuller, & Stalder, 2011).
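The coding exercise suggested above can be sketched computationally. The following is a minimal, hypothetical illustration in Python; the tile dimensions, the content-type labels, and the function name are our inventions for demonstration, not data drawn from any actual platform interface.

```python
# Hypothetical sketch: given interface tiles coded by content type and
# on-screen dimensions, compute each type's share of screen real estate.
from collections import defaultdict

def attention_ratios(tiles):
    """tiles: iterable of (content_type, width_px, height_px) triples.
    Returns each content type's fraction of total tile area."""
    areas = defaultdict(float)
    for content_type, width, height in tiles:
        areas[content_type] += width * height
    total = sum(areas.values())
    return {t: area / total for t, area in areas.items()}

# Invented example echoing the Netflix description above: original-content
# tiles three times the area of user-selected (My List) tiles.
tiles = [
    ("original", 300, 150), ("original", 300, 150),
    ("my_list", 100, 150), ("my_list", 100, 150),
]
print(attention_ratios(tiles))  # → {'original': 0.75, 'my_list': 0.25}
```

A coder could extend the same scheme with screen type (phone, tablet, television) as a grouping variable to compare economies of space across devices.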
Vectors: The Processes of Discovery
If surrounds compose choices on screens, then vectors compute choices and interconnect surrounds. Like wind passing through a mobile, vectors coordinate the interactions between humans and nonhumans animating surrounds. Vectors populate surrounds with personalized recommendations derived from probabilistic guesses about likes and ratings or clicks and time on page (Schulte, 2016; Seaver, 2012), recommendations drawn from what we’ve watched (Striphas, 2015), tailored advertisements (Turow, 2008), and decisions about what’s trending or popular, to name a few of the common widgets that fill a screen (Gillespie, 2014). 3 By selecting from a category on Netflix like “Teen TV for BFFs” or “Visually-striking movies,” listed in the appendix, users trigger a vector that presents a new surround and signals a preference to Netflix. These vectors have become increasingly important. According to YouTube’s own data, recommended videos account for 70% of viewing time on the site (Solsman, 2018).
The idea of the vector comes from early studies of the World Wide Web’s digital geography. Rob Shields (2000) describes hyperlinks as vectors through which users navigated “the labyrinthine environment of data” (p. 147). User directories, blogrolls, and webrings created a “vectoral space” of possible links, which defined the constrained freedom of early web browsing. Today, vectors unfold on content discovery platforms through ongoing interactions between users and algorithms, AI, bots, and other software agents. The Canada Media Fund, one of Canada’s largest funders of cultural goods, lists chatbots, voice recognition, and AI as important changes to viewing in its 2018 trends report (Briceno, Tanguay, Dubé-Morneau, & Engelberts, 2018).
These nonhuman guides, from Clippy to Siri, remind us that flow is co-constituted by users and platforms. James G. Webster (2014), drawing on Anthony Giddens, argues that this mutually constitutive relationship defines the digital media environment, where structures set limits on what individual agents can do but also change in response to those agents’ actions in a feedback loop. YouTube, for example, uses a deep neural network to recommend content. These nonhuman agents present content by analyzing data points to generate a list of candidate videos and then scoring them by probable relevance to the user (Covington, Adams, & Sargin, 2016).
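The two-stage pipeline Covington and colleagues describe, a candidate-generation step that winnows a huge catalog, followed by a ranking step that scores the survivors, can be illustrated with a deliberately simplified sketch. Everything here is hypothetical: the topic tags stand in for learned embeddings, and the watch-time scoring is a crude proxy for a neural ranking model, not YouTube’s implementation.

```python
# Toy two-stage recommender in the spirit of candidate generation + ranking.
# All fields, data, and scoring rules are invented for illustration.
from dataclasses import dataclass

@dataclass
class Video:
    video_id: str
    topics: frozenset        # hypothetical tags standing in for embeddings
    avg_watch_seconds: float # hypothetical proxy for predicted watch time

def generate_candidates(catalog, watch_history, k=100):
    """Stage 1: narrow the catalog to unseen videos sharing a topic with
    the user's history (a crude stand-in for nearest-neighbor retrieval)."""
    seen_topics = set().union(*(v.topics for v in watch_history)) if watch_history else set()
    candidates = [v for v in catalog
                  if v.topics & seen_topics and v not in watch_history]
    return candidates[:k]

def rank(candidates):
    """Stage 2: score candidates; here, by expected watch time alone."""
    return sorted(candidates, key=lambda v: v.avg_watch_seconds, reverse=True)

catalog = [
    Video("a", frozenset({"music"}), 240.0),
    Video("b", frozenset({"music", "rap"}), 300.0),
    Video("c", frozenset({"cooking"}), 500.0),
]
history = [Video("x", frozenset({"rap"}), 180.0)]
recommendations = rank(generate_candidates(catalog, history))
print([v.video_id for v in recommendations])  # → ['b']
```

Note how the longest video in the catalog (“c”) never surfaces: candidate generation filters it out before ranking ever sees it, which is why the design of the first stage quietly shapes what the second stage can recommend at all.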
We can characterize vectors as computer scientists do: by direction and magnitude. What are the units of calculation for discoverable content (e.g., songs, albums, playlists, groups)? Vectors direct users through surrounds, moving from the home screen to microgenres on Netflix or recommended playlists on Spotify. Once activated by users, vectors such as recommendation algorithms populate these screens with related content units. In this way, vectors have a magnitude or velocity as well. How quickly does a vector change the scale or scope of content? A Google search has tremendous magnitude, immediately narrowing content if you know the right query. 4 Platform studies often focus on the horizon-shrinking effects of personalized filter bubbles (Pariser, 2011), but certain vectors—such as lists of trending topics on Facebook and Twitter—aim to capture the widest terrain and stand in for public, not personalized, interests (Gillespie, 2014).
Vectors coordinate active user participation through trends, popularity metrics and indicators, and recommendations; they also encourage users to become part of the process of content discovery. Fans, followers, and friends contribute labor to the dissemination of media by reposting and interacting with content, easing its spread, and increasing its visibility. Thoughtful work has already described the user-centric processes of the networked audience (Marwick & boyd, 2011), who distribute content (Braun, 2015), spread it (Jenkins, Ford, & Green, 2013), help it go viral (Nahon & Hemsley, 2013), or make memes out of it (Milner, 2016; Milner & Phillips, 2017). Online platforms typically do not explain how they process user signals in detail, but committed users appear to have become adept at reverse-engineering the algorithms involved in discoverability, as Becca Lewis (2018) reveals in her study of the alternative influencer networks that promote and popularize extremist content on YouTube.
Vectors allow users to passively navigate content too. Passive vectors include interfaces that, much like traditional broadcasting flows, automatically select the next content item, such as YouTube’s and Netflix’s automatic play features or the radio mode for music streaming services. Content streaming services that offer the same basic functionality may nonetheless provide different passive vectors, as we can see when we compare, for example, Netflix with Canadian video streaming service Crave (owned by the major telecommunications corporation Bell). When an episode ends, Crave superimposes an unobtrusive icon with a 45-s countdown timer in the upper right corner while the credits take up the full screen, whereas Netflix shrinks the credits to a small box in the upper left of the screen, gives prominence to the soon-to-play next episode in the lower right, and allows ambivalent bingers only 15 s to change their minds.
Due to their dynamism, vectors can be more difficult to capture than surrounds and are likely never entirely apparent to the user (Langlois & Elmer, 2013). Web studies and digital methods provide helpful approaches. By following links, researchers, participants, and even bots can trace vectors through content (Rogers & Marres, 1999). Ethnography or auto-ethnography might offer another way to map the flows created by vectors (Beaulieu, 2004; Gehl, 2014). Trace analysis, where digital records of a subject’s Internet history are used to enhance interviews, could reveal the subtle cues that guided a user’s journey toward content discovery (Dubois & Ford, 2015). With any of these methods, the goal is to try to reverse engineer the processes working below the interface and to think through the experience of using these platforms over time and at different scales.
Feeling Flows: Experiences of Content Discovery
The combined work of surrounds and vectors encourages certain probable user experiences. Flows cultivate user behaviors—anticipating, pre-empting, and accommodating foreseeable requests. User experience and interaction designers typically gauge their work by how well it produces optimal experiences for imagined users finding content (Nagy & Neff, 2015). Here we see a connection to the branch of psychological research that also uses the term “flow” to indicate an optimal state of consciousness, one that makes people the happiest (Csikszentmihalyi, 1990). However, as Natasha Dow Schüll’s analysis of addictive gambling machines reveals, technologies can be designed to provide less beneficent flows as well. Gambling machines, according to Schüll (2012), work to facilitate a compulsive dependency in users by helping them to enter “the zone” or “the world-dissolving state of subjective suspension and affective calm they derive from machine play” (p. 19). As opposed to the goal-accomplishing movement from, for example, landing page to product description to shopping cart to checkout that a commercial website designer might envision, the flow that gambling machine designers seek is a closed loop—known as the ludic loop—within which addicts achieve a state of equilibrium through repetition.
Whether addictive or delightful, the distinct, finite options presented by surrounds and vectors enable certain experiences. These experiences are co-created by user desires and by interfaces optimized to encourage them; platforms can point, nudge, or compel users in distinctive ways. Below we sketch out a speculative and preliminary taxonomy of three flows in the hope that they inspire more discussion about specific platform experiences.
Rabbit Holes
Rabbit holes refer to flows that follow people’s curiosity through a network of interconnections and correlations, almost conspiratorially (Sauter, 2017). The popular webcomic xkcd captures the experience in a strip titled “The Problem with Wikipedia,” in which a visit to a single article leads, hours later, to a chain of ever more distant entries.
Recent controversy around YouTube’s recommender system—a vector whose combined passivity and prominence magnifies the Wikipedia “problem” immensely—illustrates the way platforms can nudge users down rabbit holes. Unless the feature is disabled, when one video ends, YouTube autoplays the next recommended video. In addition, the platform’s surround situates the user’s selected video next to numerous other recommendations, privileging the influence of the recommender and its picks. All around the viewer are invitations to keep watching, to go deeper, and to discover more. These automated recommendations have been accused of leading children to watch graphically disturbing content, guiding the politically curious to extremism, normalizing fringe conspiracy theories, and incentivizing new genres of clickbait and spam (Bucher, 2018; Lewis, 2018).
These recommendations raise important questions about YouTube’s accountability and whether ranking videos primarily by likelihood to watch might create a fast (or high magnitude) route to the most sensational content. If, as one Google publication explains, YouTube’s “deep collaborative filtering model” predicts “expected watch time” (Covington et al., 2016), then does arousing or conspiratorial content perform best? Eschewing the kind of intense moderating that Wikipedia relies on, YouTube currently appears to be unbothered by the prospect of profiting from users being nudged toward conspiracy theories. Indeed, through its recommendation-heavy surround and engagement-focused vectors, the platform optimizes problematic rabbit holes while also incentivizing producers to keep digging more of them (Lewis, 2018).
Gorking Out
Gorking out is, according to Merriam-Webster’s dictionary, “medical slang usually disparaging” used by doctors to refer to “a terminal patient whose brain is non-functional and the rest of whose body can be kept functioning only by the extensive use of mechanical devices and nutrient solutions” (“Gork,” n.d.). The homeostasis in that definition points to a similar set of machines at work on content discovery platforms, machines that minimize decision making and cognition. Exhausted workers can be entertained by screens without having to exert themselves or confront decision fatigue.
Binge watching best exemplifies the gorked-out experience. In Netflix’s case, the autoplay feature is a passive vector that allows content to change without intervention, saving humans the cognitive labor of deciding whether or not to watch the next episode or hunt for something new. The surround limits the choices on screen as well. Netflix users, last we checked, could only stop watching or select one or two related content choices. The only obstruction to watching another episode is self-control. Indeed, Netflix CEO Reed Hastings has quipped that the company’s biggest competitor is sleep.
Carousels
Buffeted by a slew of criticisms, in early 2018, Facebook CEO Mark Zuckerberg (2018) announced, “I’m changing the goal I give our product teams from focusing on helping you find relevant content to helping you have more meaningful social interactions.” Under this new approach to discoverability, Facebook supposedly would no longer optimize for engaging users at any cost but would instead discriminate among different modes of attention, seeking to cultivate affects of comfort and closeness (cf. Paasonen, 2016). The platform’s surround would now filter out distressing news from third parties while favoring familiar vectors weighted toward updates from friends and family. Even if it’s merely a cynical public-relations maneuver, Zuckerberg’s announcement speaks to the value of different discoverability experiences and it serves as a rare acknowledgment of Facebook’s immense power to shape them.
Facebook, according to these claims, is changing its surround and vectors to create an experience we call a carousel: a comfortable, circulating flow that keeps returning users to familiar people and feelings rather than propelling them toward the novel or extreme.
This list of experiences is only partial, but we hope it offers a worthwhile starting point for understanding the experiences co-created by users, surrounds, and vectors. In practice, nothing is so neat. Carousels also use endless scrolling, and you can get lost in rabbit holes while looking for content on Netflix.
These experiences are connected to the political economy of discoverability, as platforms must facilitate experiences that keep people coming back (or maximize engagement) but are also profitable. By comparison, television largely provided one kind of experience, a continuous stream of content to be watched. By organizing the schedule or program, television stations profited by attuning the privatized gaze of home viewers. Commercial breaks capitalized on this gaze, creating slots in the flow that were sold to advertisers (Wu, 2016). Dallas Smythe (1981) called these parcels of aggregated attention the audience commodity. Content discovery platforms coordinate flows in similar ways. Websites sell the first image that loads on the page as the valuable first-impression unit. YouTube plays advertisements before videos begin and even interrupts longer videos with commercial breaks. Free-to-download mobile games in search of revenue also program gameplay in ways analogous to broadcasting flows. Routine breaks in a game—crashing, dying, or the end of a turn—provide an opportunity to insert ads into the flow of play (Evans, 2015; Nieborg, 2015). These disruptions are undesirable enough that they create an alternative revenue stream for platforms. Subscription-based services like Netflix and Spotify Premium are luxuries compared with ad-supported alternatives; their uninterrupted flows are valuable enough to pay for.
These questions of profitability pose another question: What defines an optimal experience of content discovery, and for whom is it optimized?
A New Kind of Media Power: Who Optimizes Discoverability?
Although agency is distributed on content discovery platforms, it is not distributed equally. As the suddenly famous rapper mentioned in the introduction can attest, for cultural producers trying to connect with an audience in a marketplace teeming with options, the discoverability of their creations is vital. A scene three quarters of the way through the sleeper hit documentary Indie Game: The Movie dramatizes the stakes: on launch day, developer Tommy Refenes anxiously checks whether his game, Super Meat Boy, appears on the Xbox Live dashboard, a placement he knows could make or break its sales.
Refenes’s experience speaks to the unequal, often agonistic interactions between users, creators, and platforms (cf. Crawford, 2016). Greater agency over discoverability is a power relation, closely related to economic and cultural power. That power augments what McKenzie Wark (2006) names the vectoralist class, who hold sway over capital and labor by controlling flows of information. We can identify the owners of platforms and content stores as members of this class, and some of them are the richest and most powerful corporations of our time, including Amazon, Apple, Alphabet/Google, and Facebook. Thus, control over discoverability—including control over the databases that store vast amounts of information about our cultural preferences—represents a key form of platform power. Netflix, for example, has used its composition of surrounds to put its content at the top. Amazon also has been accused of ranking affiliated products higher in its search returns (Angwin & Mattu, 2016). However, the people who work on discoverability are not exclusively powerful members of a corporate cabal; they are also marginal employees working in a cubicle somewhere (or a basement, as in the case of Facebook’s now unemployed News Team). Recognizing the influence of discoverability then offers a first step to understanding the important work done by graphic designers, interface designers, programmers, and company store employees as well as the organizational goals that guide them.
Surrounds and vectors inevitably create opportunities for hacks and exploits. The vectoralist class is opposed by the producers of information, or what Wark calls the hacker class. These hacks can be benign, humorous, and increasingly political. The hacker group Anonymous enjoys gaming online polls for fun (or LULZ). Jokes aside, the hacker class might be as unaccountable as their opponents. Search engine and content optimization companies implicitly promise to game platform discoverability, and there seems to be a dark art to engineering a trending story. Recent accounts of Russian bots and the “meme magic” of alt-right political movements speak to growing concern about who might hold the balance of power in content recommendation (Marwick & Lewis, 2017).
Just as the broadcasting tradition became associated with an era of limited effects, claims about discoverability must be qualified as well. Some steps have been taken in Canada to understand the issue and point out future horizons for research. The Canada Media Fund’s second report on discoverability helps situate the limited influence of algorithmic recommendation in contrast to YouTube’s internal reporting mentioned above. According to a 2016 survey conducted by MTM, only 4% of Canadians aged 18 and older identified “smart suggestions” as their main method of discovering television content. Much more popular were recommendations from friends (27%), TV and radio commercials (17%), channel guides (11%), and channel surfing (10%). From this perspective, algorithmic recommendations do not seem to hold much sway with consumers. Instead, more attention perhaps should be given to how a platform’s front page surround functions as an advertisement, as well as a guide, that influences users’ content choices. Indeed, friends have an even bigger influence than advertising or algorithms, and the persistent importance of word of mouth is another good reminder of the limited effects of discoverability. Friendship, however, likely does not trump the platform entirely. After all, whether posts and recommendations from friends appear in one’s social media surround depends on algorithmic sorting and vector-based personalization too. These posts face the same challenge of being discovered or rather promoted by, for example, Facebook’s News Feed algorithms.
Conclusion: Researching and Regulating Discoverability
Our framework for discoverability should inspire further analytical and methodological research into the workings of specific platforms. Surrounds point our attention to the screens—first, second, and third—that present choices to the user. The spatial focus of the surround complements the temporal questions raised by vectors. Users interact with surrounds and vectors to co-program flows, leading to the experiences of rabbit holes, carousels, and gorking out. A first step is to apply these concepts in case studies of different platforms. A second is to ask whether attending to surrounds and vectors could lead to better tracking of what’s recommended, what’s trending, and other cultural analytics.
More broadly, our framework outlines the significant influence of content discovery platforms on how audiences engage with cultural goods. Discoverability is part of a history of media studies concerned with power, concentration, and accountability in broadcasting systems (Babe, 1990). In particular, our use of flow questions the connections (and discontinuities) between platforms and broadcasting—a subject of ongoing discussions in media and cultural policy (Enli, Moe, Sundet, & Syvertsen, 2013; Lobato, 2017; Thorson & Wells, 2016). Connecting online platforms to broadcasting inspires us to think of vectors, surrounds, and experiences as regulatory problems not unlike television’s programming schedule. The platforms’ capacity to personalize engagement with cultural content essentially means that these corporations are creating an individualized, constantly changing cultural policy for each user—a radical break from the public accountability of broadcasting’s provision of choices.
Discoverability contributes a new facet to the growing problem of platform governance. 5 As we have seen with the case of YouTube, when left to their own devices, online content discovery platforms will self-regulate as little as possible and will avoid disclosing the influence of discoverability on our cultural experiences. Our framework complements studies of the affordances of social media platforms (Bucher & Helmond, 2017; Nagy & Neff, 2015), algorithmic accountability (Ananny & Crawford, 2016), and the conditions of platform production (Juul, 2010; Montfort & Bogost, 2009; Nieborg, 2015). At a time when platform governance has finally come under greater public scrutiny, our analysis of discoverability offers a way forward for better understanding the growing influence of platforms on global cultural flows.
Footnotes
Appendix
Acknowledgements
The authors wish to thank Nick Seaver, Robyn Caplan, Sara Bannerman, and Luke Stark for their generous comments in the preparation of this manuscript. Many of these ideas were developed through workshops at Data & Society. The authors wish to thank this institution for its support. All errors are ours alone.
Declaration of Conflicting Interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: This research was supported by the Social Sciences and Humanities Research Council of Canada.
