Abstract
Netflix actively fueled what is known as the myth of big data, promoting its recommender system and data-driven production as cutting-edge, all-seeing, and all-knowing. Today, however, the company increasingly acknowledges the role of human expertise and creativity. In this paper I explore the strategic repositioning of Netflix from technology company to entertainment company, which enables it to be understood as both “data” and “gut.” I discuss this transformation as motivated by increasing public criticism of data and algorithms and by the company’s foray into original programming. More specifically, I examine how Netflix, in public-facing materials, discusses big data and how those ideas are taken up in public discourse. These sources disclose an assumption of opposing characteristics between data and human expertise and creativity. As part of a larger critique, I comment on the limitations of this opposition between data and the human for thinking about Netflix and technologies at large.
The video streaming service Netflix was initially celebrated for its cutting-edge data-driven culture. This image was fueled not only by the Netflix Prize and stories of its use of data and algorithms for production choices, as with its House of Cards series, but also by disclosures that 80 percent of what people watch on Netflix originates in recommendations (Gomez-Uribe and Hunt 2016). While Netflix’s co-CEO and Chief Content Officer Ted Sarandos reportedly stated in 2015 that decision-making was 70 percent data and 30 percent human judgment (Wu 2015), three years later this logic was numerically reversed: in an interview he put the proportions at 70 percent gut and 30 percent data (Adalain 2018). Interestingly, both characterizations fit Netflix. I argue this is due to a strategic ambivalence the company perpetuates about its identity. It leads to discussions as to what Netflix is (see Lobato 2019, 20–1) and whether Netflix should be understood as a technology company or a media company (Poell et al. 2021).
Within contemporary debates about big data we find both utopian and dystopian rhetoric. On the one hand, big data are said to provide new ways of knowing through abstractions and correlations not accessible to the human senses (Elish and Boyd 2018; Hallinan and Striphas 2016; Thylstrup et al. 2020) without “prejudgments and prejudice” (Mayer-Schönberger and Cukier 2013, 141). On the other hand, big data are seen to threaten privacy and contribute to invasive marketing (Boyd and Crawford 2012, 664). More concretely, in discussions of Netflix, its recommender system has been framed as either “an innovative panacea to the limits and biases of the human mind, or a corruption of humanistic criticism, taste and culture” (Frey 2021, 99). As Matthias Frey (2021) astutely points out, both positions reproduce a myth pertaining to big data and its all-knowing qualities. Furthermore, they disclose the assumption of opposing characteristics and epistemic potentials between big data and humans (Stauff et al. forthcoming). In this paper, I examine how Netflix, in its public-facing materials, discusses big data and how those ideas are reproduced and transformed in popular discourse. Herein I identify a strategic shift in positioning over time that enables the company to be understood as, to use Sarandos’ phrasing, both data and gut.
In the first part of this paper, I explore the myth of big data as a backdrop against which to discuss the strategic discursive repositioning of Netflix from technology company to entertainment company. This transition is discussed as motivated by the increasing public criticism of data and algorithms and the company’s foray into original programming. Subsequently, I offer a discursive analysis of how Netflix’s TechBlog, Netflix Research, LinkedIn posts and interviews with representatives in the press between 2012 and early 2022 reflect on the company’s use of data. This analysis is guided by the question: How do these sources reflect on big data and what assumptions underlie their statements? Here we see how data and humans are presented as having opposing characteristics and epistemic potential, but also how, over time, there is increasing concern with incorporating a “human touch” into their products and services. Having looked at Netflix’s own communication, I then explore how Netflix is understood by the public. More specifically, I examine well-publicized incidents of criticism regarding Netflix. All sources are connected to patterned and structured ways of speaking about data and algorithms and tend to reproduce the dominant opposition between big data and human expertise and creativity. Concluding the paper, I discuss how, even though the role of big data has become more nuanced in narratives by and about Netflix, an equally unproductive discourse has emerged that limits our understanding of the streaming video service.
Focusing on how Netflix defines itself and is understood in terms of its use of big data contributes to scholarship on the myth of big data as well as to television and platform studies. Examining the discussions by and of Netflix adds to an understanding of how data and algorithms are perceived in contemporary society. It furthermore reveals the gradual transformation of the relationship between data and humans in the wider discourse and how Netflix has responded by positioning itself as an entertainment company. As part of a larger critique, I comment on the limitations of this opposition for understanding media technologies and their impact on culture and society. While my argument connects with insights from others about Netflix (e.g., Frey 2021; Gilmore 2020; Lobato 2019), I add to this in multiple ways: first, by considering Netflix as broader than its recommender system; second, by empirically supporting claims of its discursive repositioning as an entertainment company; third, by connecting Netflix’s strategic positioning to broader shifts in understandings of big data and algorithms; and finally, by unpacking and criticizing the data versus human opposition that emerges in that discourse.
The Myth of Big Data
Scholarly publications and popular books on the potential of big data exhibit what is sometimes termed a “Moneyball mindset.” As Caitlin (2021, 96) explains, this way of thinking generally contains four main tenets:
data is unencumbered by human “beliefs and biases”;
data is superior to intuitive and experiential forms of knowledge;
data and intuitive forms of knowledge are usually in tension with one another; and
data, if used wisely, provides strategic advantages that can overcome an organization’s deficits and challenges.
danah boyd and Kate Crawford (2012) have termed this belief in data as a higher form of knowledge, one linked to objectivity, accuracy, and truth, a mythology. Netflix actively fuels this myth of big data to promote itself as essential and innovative (Frey 2021, 100).
In academic and popular discourse, ideas of “magic,” “divination” and “wizardry” animate imaginaries of big data and AI (Crawford 2021; Elish and Boyd 2018; Finn 2017; Gillespie 2014). These ideas reflect the myth of big data. Describing the capabilities and functioning of these technologies in terms of magic expresses praise while underscoring their inscrutable nature (Selbst in Elish and Boyd 2018, 63). Problematically, these metaphors cloak the limitations of data and algorithms and the work needed to carry them out (Elish and Boyd 2018, 64).
Ideas of magic and wizardry surround Netflix too. In 2017 the scholar Ed Finn explained how “for Netflix, the brand is algorithmic, the magic computational, and the humans are kept carefully out of the spotlight” (p. 94). Writing about Netflix’s personalized recommendation system, Alexander (2016, 86–7) states, “Since we can’t exactly tell why one title was recommended rather than another, we simply assume that Netflix knows us. The god resides in the machine, and it is unknowable and invisible as any other divine and unworldly entity.” Finn (2017) similarly comments on how even the chief engineer of Netflix’s recommendation systems cannot explain how its outputs emerge: this is a “beautiful illustration of the function that magic still plays in computation as a way to bridge causal gaps in complex systems” (p. 96). Such statements disclose how big data and algorithms are understood as discerning patterns and knowledge not available to the human mind (Frey 2021; Hallinan and Striphas 2016; Thylstrup et al. 2020).
The Netflix recommender system is only one of the company’s products and services that sustains the myth of big data. In 2012 there was considerable noise pertaining to the fact that Netflix had ventured into original programming. In an unprecedented move for the entertainment industry, the company ordered two seasons of House of Cards without a pilot, reportedly relying instead on data mining and algorithms. Suddenly, thanks to vast data collection and algorithms, it seemed possible to predict the success of films and series. This idea was pushed by Netflix and uncritically reproduced in the popular press (Frey 2021, 109). Sarandos, for instance, stated in interviews that the series was “generated by algorithm.” In a piece about Netflix for Salon, staff writer Andrew Leonard (2013) claimed: The companies that figure out how to generate intelligence from the data will know more about us than we know ourselves and will be able to craft techniques that push us toward where they want to go, rather than where we would go by ourselves if left to our own devices.
Herein Leonard participates in and reproduces the myth. This is evident in how he repeatedly emphasizes the vast amount of data collected and refers to Netflix’s making sense of it as “pure geek wizardry,” thereby ascribing it magical qualities. Although Leonard raises the critical question as to whether we want creative decisions about shows to be left to an algorithm, evident in his thinking is the underlying assumption that data can capture and know everything.
From Technology Company to Entertainment Company
Initially, Netflix celebrated its data-driven culture, fueling the myth of big data and positioning itself as a technology company—or what Sarandos termed “a Silicon Valley-based intellectual property company that was born on the Internet” (Curtin et al. 2014, 138). With the Netflix Prize competition (a challenge to improve the accuracy of the recommendation system), its transition from DVD rental to streaming, and its promotion of data-driven decision-making around House of Cards, the company openly celebrated big data between 2006 and 2013. The Netflix Prize competition led the popular press to reproduce its marketing message of the recommender system as that “of an all-seeing, all-knowing taste expert” (Frey 2021, 102). Yet not everyone embraced the hype of the autonomous functioning of data and AI. In The New Yorker, law professor Tim Wu, for instance, responding to such dominant representations in 2015, claimed that he believed the secret sauce of Netflix’s algorithms to be a human. His position, however, points to the dominant understanding of the company at the time as algorithmic.
Netflix’s positioning, however, gradually changed as they withdrew from their “algorithm-above-all” approach (Frey 2021, 198; Wayne 2022). This transition corresponds to the moment when data and algorithms, after a period of hype, received increasingly critical evaluations in popular discourse (e.g., following Cambridge Analytica). An explicit distancing by Netflix from the tech industry occurred with the backlash against tech giants in 2019, seen also in the Congressional hearings. At the time, Netflix referred to platform companies as “data-exploiting” and clarified its own reliance on subscription fees rather than advertising (Zayn and Serwer 2020). Yet these various platform scandals had ramifications for Netflix too. Steve F. Anderson (2017) writes that as “audiences became more aware of the functioning and limitations of recommendation systems, they also became more critical” (p. 47). Netflix, Anderson states, responded to this criticism by offering a wider view of how its recommendation algorithm functions—on the TechBlog, in public presentations, and so on. In line with this gradual repositioning, CEO Reed Hastings pointed out in 2020 that Netflix has more employees in Hollywood than in Silicon Valley and that two-thirds of its spending is on content. Moreover, as a single service that doesn’t rely on advertising, he suggested that Netflix is neither a technology company nor a media company, but an entertainment company (Sherman 2020).
Netflix displays “the same quality of discursive slipperiness as these other platforms [Facebook and YouTube]” (Lobato 2019, 39). Facebook and YouTube, for instance, have demonstrated strategic resistance to being understood as media companies, a rationale linked to avoiding regulatory responsibilities (Napoli and Caplan 2017). While some journalists and academics see Netflix as a platform company, others state that, because it isn’t open to user-generated content, it should be regarded as a media company (Lobato 2019; Lotz 2017; Poell et al. 2021). Hastings’ statement that Netflix is an entertainment company strategically avoids both the classification as a media company, and thus the regulatory responsibilities that come with it (Lobato 2019, 39), and as a technology company. Importantly, the idea of an entertainment company conveniently suggests neither interpretation is wrong.
Netflix’s Communication: A Discursive Analysis of Public-facing Content
In what follows I offer a discursive analysis of the Netflix TechBlog, Netflix Research, Netflix on LinkedIn and statements by representatives in the press. Since 2010 the Netflix TechBlog has been a space where Netflix staff members offer insight into technology issues and the development of the company’s services. The website for Netflix Research launched in 2018 and is characterized as a place that provides outsiders with an understanding of the work at Netflix, allowing them to connect with Netflix and to stay informed about job opportunities. In both forums employees discuss Netflix technologies without disclosing proprietary information. The LinkedIn account of Netflix is a markedly different outlet: here the more socially engaged side of the company is communicated.
Guiding my analysis is the question: How do these sources reflect on big data and what assumptions underlie their statements? I have identified four main themes that together provide insight into their ideas about algorithms, data and humans: data-driven organization, innovation and experimentation platform, art and science, and social responsibility. Considering also when these statements were made, I identify how Netflix continues to celebrate data and algorithms but increasingly references its use of human expertise and creativity. This shift is linked to its positioning as an entertainment company and contributes to its strategic ambivalence.
Data-driven Organization
Netflix presents itself as a data-driven organization wherein recommendation is a “key part of our [their] business.” They seek to “personalize as much as possible” and disclose that 80 percent of consumption on the streaming video service comes from recommendations (Gomez-Uribe and Hunt 2016). In 2016 they revealed they were expanding to 130 new countries in a post titled “Recommending for the world #algorithmseverywhere.” The use of the hashtag #algorithmseverywhere underscores the notion that their core business is algorithms. Their insistence on recommendation as their brand is how they differentiate themselves from the traditional television industry and other competitors. It also results in a need to perpetuate trust in the algorithm through the myth of big data (Frey 2021, 112).
Netflix promotes a self-proclaimed “data-driven mindset.” In 2016 they write, “By following an empirical approach, we ensure that product changes are not driven by the most opinionated and vocal Netflix employees, but instead by actual data” (Urban et al. 2016). This is repeated further on in the blog post when they stress “using data to drive decisions.” In their posts until roughly 2017 they conjure and perpetuate an image of themselves as a cutting-edge technology company driven by data.
Netflix claims that everything it does is in service of its members. They use phrases like satisfy, love, and sparking joy to characterize the experience they seek to provide. It is always about “helping” members find titles to “enjoy” and centers on the connection brought about by sharing stories (Sudeep 2019). Hallinan and Striphas (2016, 1) have critiqued the Netflix mantra of “connect[ing] people to movies they love” in that it “mystifies the semantic and socio-technical processes by which these connections are made.” It thereby sustains the magical qualities conjured upon data and algorithms.
In later blog entries the role of data is more restricted. For example, machine learning is used “to support decisions and decision-makers, not ML to automate decisions” (Uphoff 2018). Interestingly, in 2022 the VP of data and insights stated, “Our Data and Insights Team is at the center of a company culture that drives on data rather than chasing opinion” (Tingley et al. 2022, my emphasis). Here data is framed as more objective and unemotional. In that same article the role of data is also nuanced: they claim it supports a learning agenda and contributes to decision making. Significant decisions are left to “Informed Captains” who make “judgment calls” after going over the relevant data and getting input from colleagues. Humans thus form opinions and evaluate. Overall, a more conflicted attitude toward data becomes apparent, wherein data cannot be left to its own devices.
In terms of its characterization as data-driven, it is important to point out that data have always been integral to the media industry. Data have long served attempts to better know audiences and have guided industry professionals’ decisions about content production (Hallinan and Striphas 2016). However, unlike traditional television, which relied on small sample sizes when assessing viewer behavior, Netflix collects data about “all” viewing habits of its subscribers in real time. This latter fact of scale has fueled imagination around the potential of big data to capture the whole domain and make accurate and objective calculations regarding taste.
Innovation and Experimentation Platform
There is foremost a deep-seated commitment and self-proclaimed passion for innovation. Netflix claims to lead research and innovation in recommender systems, participating in and sponsoring conferences and organizing workshops. The language is profuse with notions of “evolution” and “improving,” always taking things to the “next level.” The launch of the Netflix Research website in 2018 contributes to this image. Furthermore, metrics are persistently incorporated into their blogs and statements, not just about the sea of data they have at their disposal, but also the number of subscribers, streamed hours, breadth of the catalog and so forth. They also highlight the complexity and technological sophistication of their mission. They refer to their work on their recommender system as a “Netflix Recommendation saga.” The use of the term saga conjures ideas of a long story of heroic achievement. The 2006 Netflix Prize competition was a particular display of their role as frontrunner in technological innovation. Taken together, these blog entries produce an image of revolutionary products and services.
Importantly, as mentioned, Netflix differentiates its recommender system from traditional television by stating that it does not rely on demographic data, but on all subscribers’ activities on the video streaming service (Wayne 2022). Netflix claims to target content at taste communities rather than demographic categories (Hallinan and Striphas 2016; Seaver 2021). This has been termed post-demographics. Contrary to the radical innovation suggested by Netflix, this is not new: psychographic marketing existed as far back as the 1950s (presented as motivation research) and has assumed various forms in the entertainment industries since the 1990s (Elkins 2019). Moreover, demographics and taste can overlap (Lotz 2020) and are often connected to race, gender, sexuality, and class (Chun 2016, 374).
Connected to their image as a data-driven company, Netflix seeks to constantly evolve and improve by learning from data. They focus on experimentation and have a self-declared commitment to the scientific method (Tingley et al. 2022). Here a central role is allocated to A/B testing, underscored by a post titled “It’s All A/Bout Testing: The Netflix Experimentation Platform” (Urban et al. 2016). They claim an obsession with A/B testing with the purpose of “failing fast” and incrementally being able to improve their service with “increasing rigor and sophistication” (Krishnan 2016). Interestingly, in line with their commitment to the scientific method, Netflix says they actively confirm and falsify ideas and hypotheses with A/B testing. This proposes an environment that is driven not by data alone but also by informed theory; it thus departs from the idea of theory-free analysis. However, Netflix upholds the myth by suggesting that the ultimate truth is revealed with A/B testing: “nothing proves the value of decision making through experimentation like seeing ideas that all the experts were bullish on voted down by member actions in A/B tests” (Tingley et al. 2022).
An interesting form of empiricism surfaces in their discussions (see also Frey 2021, 109–10). These sources reveal a conviction that big data can provide meaningful and reliable insight into the social. They furthermore foster an underlying idea that the granularity of the data Netflix collects enables it to generate an accurate depiction of user taste and preferences. This corresponds with the observation by Gilmore (2020, 372), who, in his analysis of Netflix’s positioning as an experimentation platform, finds that its language of affinity rather than affect points to the notion that data science can fully capture who a person is.
Art and Science
Netflix increasingly walks a tightrope between the ubiquity and significance of data and algorithms on the one hand, and professional judgment, creativity, and expertise on the other. This becomes particularly evident in their discussions from 2018 onward. Earlier, I linked this shift to the backlash against data and algorithms and the company’s entry into original programming. This balancing act is, for instance, found in blog titles such as “AVA: The Art and Science of Image Discovery at Netflix” (Riley et al. 2018). Importantly, such phrasing sustains an opposition between art and science by treating them as distinct domains.
In a 2018 interview with Vulture, the role of data is downplayed by Ted Sarandos and Cindy Holland, Netflix’s VP of original content. They comment on how relying solely on data creates the risk of making the same content and how looking at past behavior doesn’t say anything about the future (Adalain 2018). The article even cites a showrunner who asks Netflix for more data-driven feedback. Quoting Sarandos: “Most of it is informed hunches and intuition. Data either reinforces your worst notion or it just supports what you want to do, either way” (in Adalain 2018). Here, data and humans are portrayed as producing different knowledges. Importantly, though, data is still understood to produce objective facts.
Netflix also explicitly addresses the need for humans in the process of artwork selection (Shahid and Haq 2020). Computer vision is said merely to produce candidate artworks; the final say lies with the creative production team. Some decisions, Netflix observes, are best left to humans, as the most visually appealing artwork may not best reflect the title’s story; selection requires knowledge of the narrative, themes, and characters. Again in 2020, but now in relation to content decisions, it is said that “Content, marketing, and studio production executives make the key decisions,” while data is used for identifying similar titles and predicting audience sizes (Dye et al. 2020).
The Netflix website explains the need to incorporate insights from consumer research and curated metadata to add “a human element” to its algorithmic systems. As with artwork selection, this is predicated on the idea that humans have the potential for deep knowledge of content and expertise. Indeed, Netflix uses human taggers for its recommendation system. In 2014, the Atlantic staff writer Alexis Madrigal (2014) explored Netflix’s creation of some 77,000 so-called altgenres, including “Visually-striking Inspiring Documentaries.” Here it became apparent how much human labor was required to create these genres. In presentations, Netflix refers to the work of “in-house experts” tagging over 200 different story data points (Sudeep 2019, np). Yet as late as 2021, in a comment on a LinkedIn post concerning Editorial Taggers at Netflix, a user expressed surprise to learn that human analysts are involved (Netflix LinkedIn Post 2021).
In 2020 three members of the Creative Editorial team at Netflix published a post on LinkedIn about their jobs (McIlwain et al. 2020). In it they describe their work as a cross-over between video clerk and librarian/taxonomist, referring to how their team “blends art and science.” They reflect on how their process involves a lot of internal discussion and emphasize their expertise and love for all genres and movies. Their work of grouping titles is seen as providing additional meaning to audiences and making Netflix a “trusted entertainment brand.” The editorial analyst writes about how she checks the tags of titles and writes snippets for each title: “bespoke recommendations that sound more like friends talking to friends about movies and TV shows.” Additionally, she creates a document that serves to align trailers, artwork and other promotional materials. In conclusion she reveals: “and, yes, Netflix doesn’t run on computers alone—there are teams of people adding a ‘human touch’ to everything you put in your watch list.” The remark calls into question the notion of blending when humans are merely added to the mix. It furthermore seems to address a particular societal concern, namely that everything nowadays is left to computers. This concern is neutralized by invoking human expertise and emotion, and is linked to trust. However, the idea of touch suggests only a very light intervention: humans soften yet do not radically upset the potential of these technologies.
Social Responsibility
Netflix is responsive to concerns over the negative impact of algorithms on society, as moral considerations emerge in its communications. They claim some responsibility and ability to intervene. For instance, aside from making recommendations accurate, they also want them to be diverse and to “appeal to your range of interests and moods” (Amatriain and Basilico 2012), a desire repeated in the aim of addressing “the spectrum of members interests” (Alvino and Basilico 2015). Personalized recommender systems moreover presuppose that “while everyone’s taste may be different, the way we as humans individually come to our taste is essentially biological and universal” (Cohn 2019, 108).
Recommender systems predict the future based on past consumption and its associated behavior; in so doing, they install the future they predict (Birhane 2021; Chun 2016). Effectively they stifle a more diverse range of consumption by suggesting familiar themes and variations, thereby ignoring the complexity of taste. In a presentation, the harm of “too much personalization” is discussed within the context of Eli Pariser’s filter bubble, and it is said that exploration and diversity are built into the algorithms (Sudeep 2019). The discussion also expands beyond diversity with the identification of “freshness,” “novelty,” and “fairness” as trends in personalization (Sudeep 2019). Here the ability to influence the recommender system by changing how these systems are designed is made apparent. It becomes the responsibility and duty of Netflix employees to restrain and adapt algorithms for the public good.
Writing about the “Continue Watching” row, which points users to content they have previously watched and might want to resume, they comment on the need to be “thoughtful.” They want people to “Binge Responsibly” because they find it important that members explore new content (Taghavi et al. 2016). Filter bubbles and addictive binge-watching are both examples of moral concerns that have animated public discourse. Underlying these concerns is a technologically determinist approach in which users are seen as defenseless against new technologies. Netflix attends to these concerns and attempts to defuse them by reassuring the public that it is in the company’s own interest to have members consume diverse content. Again, they reveal that these systems aren’t autonomous and that they intervene in their design to prevent harm.
Connected to the earlier comment on the development of personalized visuals, Netflix states it wants to avoid “clickbait.” Each artwork is supposed to be “engaging, informative, and representative of a title” (Chandrashekar et al. 2017). To achieve this goal, they explain how they rely on artists and designers who work with images selected and prepared by algorithms. Again, ideas of affect, expertise and contextual knowledge are used as arguments for having artists and designers make the final decisions. Nonetheless, data and algorithms are celebrated for the speed and scale at which they work.
Within Netflix’s own communication we thus see the celebration of big data and algorithms dwindle. There is an increasing concern with revealing that they incorporate a “human touch” in their products and services, seemingly in response to societal concerns. As discussed, however, data and humans are here treated as having opposing characteristics and epistemic potentials. Data are seen as “accurate” and “objective,” working at great speed and on a large scale, while humans are lauded for their “emotion” and “expertise.”
Broader Public Discourse: The Perils of Data and Algorithms
Having examined how Netflix, in public-facing materials, discusses big data, I now look at how those ideas are taken up in public discourse by exploring backlash reported in the popular press. More specifically, I explore how the public understands Netflix in terms of its use of data and algorithms and the role of humans. I discuss three well-publicized incidents: tensions between curation and algorithmic recommendation, issues of racism in personalized artwork, and the cancelation of shows by the algorithm. These address different areas—distribution, production, and reception—where data is used. Together they suggest a negative evaluation of the impact of data and algorithms on culture and society and an understanding of Netflix as strongly guided by these technologies.
Curation Versus Algorithmic Recommendation
A negative evaluation of data and algorithms is found in a scathing editorial for Harper’s by the award-winning director Martin Scorsese (2021). In it he bemoans how the algorithms of streaming video services like Netflix reduce the art of cinema to mere “content.” Scorsese makes a plea for human curation, which he describes as an “act of generosity—you’re sharing what you love and what has inspired you.” Such curation is evident in the offerings of streaming video services like Criterion Channel, TCM, and MUBI.
A focus on human-curated playlists became popular again among video streaming services in 2019. That year HBO launched a promotional website called “Recommended by humans,” composed of paid fan testimonials (Spangler 2019a, 2019b). HBO Max followed in 2020, positioning itself in opposition to Netflix as a “human-first platform,” responding, as one journalist suggests, to “the impersonal nature of algorithmic recommendations” (Alexander 2020). Around this time Netflix started experimenting with curated collections too. This resurrection has been linked to controversies over Netflix’s cancelation of shows at the will of algorithms (Newman 2019) and has been characterized as an attempt to put agency back with humans (Lattanzio 2019), a way to compensate for the shortcomings of algorithms (Newman 2019). As identified earlier, Netflix shows itself aware of these shortcomings, seeking “to incorporate more of a human element in our systems.”1 Investment by companies in human curation is seen as a desire for “expertise, distinctive esthetic judgments, clear expenditure of time and effort,” as well as the need for a narrative, an explanation, rather than correlations (Bhaskar 2016). Humans, then, are seen to generate different types of knowledge linked to expertise and creativity, but also the capacity for surprise.
Artwork Personalization
Artwork is key in Netflix’s efforts to drive engagement and increase streaming time. Initially, Netflix sought to select the single best artwork for each title, presenting the same visual to all users. Nowadays, however, Netflix matches artwork to each user, highlighting aspects of a title that a specific user would presumably find appealing. For example, romcom enthusiasts are likely to be offered artwork for the movie Good Will Hunting showing Minnie Driver and Matt Damon kissing, whereas (non-romantic) comedy fans might get imagery featuring Robin Williams (Chandrashekar et al. 2017). A preference for certain cast members might also be highlighted. These artworks are “entryway paratexts” that prepare us for titles and frame our interpretation of them (Gray 2010, 79). Importantly, targeting niche viewership is not a novel practice; film marketers have increasingly used different trailers to attract multiple audiences to particular films (Gray 2010, 52).
The experiments with personalized artwork have been met with backlash. Users complained about being targeted by ethnicity when they were shown thumbnails of black cast members who had only very minor roles in the films or shows in question. These thumbnails were found to be misleading in suggesting that these were leading characters (Nguyen 2018). For example, the promotional artwork for the comedy Like Father (2018), featuring Kristen Bell, Kelsey Grammer, and Seth Rogen, showed the black actors Blaire Brooks and Leonard Ouzts, both of whom had limited screen time in the film (Brown 2018). Ruha Benjamin (2019, 9) writes: “Why bother with broader structural changes in casting and media representation, when marketing gurus can make Black actors appear more visible than they really are in the actual film?” Netflix responded that it does not know its individual users’ race, ethnicity, or gender and that these factors play no part in personalization. This response reveals the problematic assumptions that underlie the targeting of taste clusters. At the same time, it presents the system as all-knowing, creeping in on your identity. It also echoes the now famous story of the closeted lesbian who was outed when Netflix released users’ viewing habits without properly anonymizing the dataset.
In the Wall Street Journal article “At Netflix, Who Wins When It’s Hollywood versus the Algorithm?” reporters Ramachandran and Flint (2018) write about an internal debate at Netflix in 2016 concerning results from image testing. They had found that artwork for the second season of Grace and Frankie performed better for Netflix’s US subscribers when Jane Fonda (Grace in the series) wasn’t included. This conflict was described as a rift between the content team in Hollywood and the tech team in Silicon Valley. Whereas the former was concerned about contractual engagements and retaining a good relationship with Fonda, the latter believed they “shouldn’t ignore the data” (Ramachandran and Flint 2018). In the article, data and algorithms are literally pitted against humans. Their tension in the industry is highlighted with a quote from a disgruntled Hollywood executive representing talent in negotiations with Netflix: “The algorithm became this Wizard of Oz. It was this all-knowing, all-seeing, ‘Don’t f—with us’ algorithm” (in Ramachandran and Flint 2018). Now that it has ventured into original programing, Netflix is said to have shifted its focus and is “learning to temper its love of data models and cater to the wishes of A-listers and image-conscious talent” (Ramachandran and Flint 2018). Importantly, as is explicitly addressed, the “overriding of metrics” is done not because of a lack of trust in the algorithms, but because of the need to appease talent.
Cancelations
There have furthermore been controversies about Netflix’s cancelation of shows. Following the cancelation of the adult animated program Tuca & Bertie in 2019, Lisa Hanawalt, its production designer and producer, pointed on Twitter to the show’s critical acclaim. She lashed out at the Netflix recommendation algorithm: “None of this makes a difference to an algorithm, but it’s important to me and the way I want to continue making art in this world.” In replies to this tweet several Netflix viewers responded that they had no idea the show even existed and went on to blame the recommendation algorithm. The algorithm is cast as limited for not being able to understand the critical acclaim lavished on the show. Here a familiar stark divide is drawn in which humans are portrayed as warm, creative, and loving on one side, and the algorithm as cold, rational, and detached on the other. Moreover, such criticism suggests that recommender systems are powerful tools of suggestion when, in fact, users consult a whole range of sources in deciding what to watch (Frey 2021, 133). Interestingly, under continued pressure from fans of Tuca & Bertie, the series was picked up by Cartoon Network’s Adult Swim for a second and now even a third season.
Moving Beyond the Data-human Divide
Although Netflix has celebrated its data-driven culture, confidence in the capacity of big data is waning, with criticism regularly leveled against data and algorithms. Data and algorithms are no longer pure “magic,” as their limitations and capacity to create harm are increasingly understood. Discussions now equally address the limitations of data-driven systems and acknowledge the need for human expertise and creativity. To differentiate itself and add value to the market, Netflix needs to uphold its image as an innovative company leading the datafication and algorithmization of society. It therefore continues to celebrate data and algorithms, but now also creates room in its narrative for humans and their creativity and expertise. This opening up corresponds to the company’s entry into original programing and coincides with its strategic shift from a technology company to an entertainment company. Interestingly, it seems that the popular understanding of Netflix continues to see it as primarily driven by data and algorithms.
Problematically, in this broader data discourse, humans on the one hand and data and algorithms on the other are cast as distinct entities with squarely different capacities and epistemic potentials. Algorithms are described as cold and detached, unable to account for context or deal with relations. Humans, and in the case of Netflix their “touch,” serve as a safeguard against these limitations. While Netflix’s reliance on big data has been downplayed, in that data is now often said to “assist” and “supplement, not replace” human decision making, humans remain distinct from the technical system, standing outside of it. A similar tendency to oppose human and machine has been identified by Bernhard Rieder and Yarden Skop (2021) in the context of online speech moderation. They comment that this opposition is too simplistic and monolithic and that it obscures how machines and humans blend in concrete systems.
Even though big data is now more modestly positioned in Netflix’s narratives, we are left with an unproductive discourse. Like the attribution of magical qualities, the opposition between data and humans lets technological systems evade scrutiny. Netflix is certainly also the product of human practices, an important point of departure going forward. This is not to ignore the fact that data and algorithms play an important role, but rather to emphasize the need for open discussion about when, how, and why to use them. It entails moving away from discussions of “the data” and “the algorithm” in the abstract. Instead, we should question socio-technical systems in their situated contexts, which, in the case of Netflix, inform how phenomena such as film/television, identity, and taste are understood. It also entails resisting hypes and grand tales of revolution and innovation in favor of charting change and continuity with earlier tools and practices.
Footnotes
Acknowledgements
The author thanks the anonymous reviewers for, rather than dismissing the first unfocused submission outright, providing critical and constructive feedback. The author is grateful to Markus Stauff for conversations and collaborations that have been beneficial to her thinking on the topic.
Declaration of Conflicting Interests
The author declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author received no financial support for the research, authorship, and/or publication of this article.
