Abstract
The news that young people consume is increasingly subject to algorithmic curation. Yet, while numerous studies explore how algorithms exert power in citizens’ everyday lives, little is known about how young people themselves perceive, learn about, and deal with news personalization. Considering the interactions between algorithms and users from a user-centric perspective, this article explores how young people make sense of, feel about, and engage with algorithmic news curation on social media, and when such everyday experiences contribute to their algorithmic literacy. Employing in-depth interviews in combination with the walk-through method and think-aloud protocols with a diverse group of 22 young people aged 16–26 years, it addresses three current methodological challenges to studying algorithmic literacy: first, the lack of an established baseline about how algorithms operate; second, the opacity of algorithms within everyday media use; and third, limitations in technological vocabularies that hinder young people in articulating their algorithmic encounters. It finds that users’ strategies for making sense of algorithms are context-specific, triggered by expectancy violations and explicit personalization cues. However, young people’s intuitive and experience-based insights into news personalization do not automatically enable them to verbalize these insights, nor does knowledge about algorithms necessarily stimulate users to intervene in algorithmic decisions.
Introduction
News users are increasingly exposed to algorithmically curated information. People under 25 in particular depend on personalized media for news, such as Facebook, YouTube, and Instagram (Kalogeropoulos, 2019). Moreover, news organizations are increasingly experimenting with personalizing their websites and apps to attract and retain audiences (Lieman, 2019). Thus, algorithms increasingly impact how young people build up understandings of the world around them. Yet, little is known about users’ awareness and experiences of algorithmic news selection. While numerous conceptual studies have explored how algorithms shape everyday life (e.g., Beer, 2017; Diakopoulos, 2015; Willson, 2017), these typically take a technological perspective to uncover how algorithms exert power. Until now, there has been considerably less empirical effort to understand algorithmic culture from the point of view of users, with some notable exceptions (Bucher, 2017; Fletcher & Nielsen, 2019; Hargittai et al., 2020).
Second, despite this recent uptake in scholarly attention to users’ perceptions of algorithms, little is known about the implications of such “folk theories” (DeVito et al., 2018) for user behavior. This applies to people’s news practices in particular (see Min, 2019, for an exception). As Siles et al. (2020) argue, understanding the reciprocal relationship between users and algorithms goes beyond capturing how audiences assume algorithms function. It also involves considering the consequences of these algorithmic imaginaries, including the strategies and tactics through which users engage with algorithms and how algorithms become incorporated into everyday life (Lomborg & Kapsch, 2019; Siles et al., 2019; Van der Nagel, 2018). Of course, users’ agency around algorithms is substantially restricted by platform structures (Van Dijck et al., 2018). Yet, users are not completely powerless either. Through both explicit actions (i.e., the manual personalization tools a platform offers) and implicit ones (e.g., adjusting browsing behavior), users may intervene in algorithms’ decisions (Haim et al., 2018; Min, 2019; Thurman & Schifferes, 2012). These actions help shape and refine the algorithms of personalized news media, as feedback loops emerge between user behavior and future content exposure (Kitchin, 2017; Thorson, 2020). In relation to news, such expressions of “algorithmic literacy” (Bruns, 2019) are particularly important to consider because they (co-)shape users’ exposure to information and their orientations to public life more generally.
This article takes a user perspective to explore the interactions between algorithms and news users. Building upon Cotter and Reisdorf’s (2020) conceptualization of algorithms as “experience technologies,” it asks how young news users perceive, feel about, and behave around algorithmic curation on social media, and under what circumstances those experiences contribute to their algorithmic understandings. Methodologically, the article employs a combination of the walk-through method (Light et al., 2018), think-aloud protocols, and in-depth interviews. It discusses how such an approach can help overcome three major challenges for studying algorithmic literacy: (1) the lack of an established baseline about how exactly algorithms operate, (2) algorithms’ opacity within everyday media use, and (3) possible gaps in vocabulary among media users to describe their algorithmic encounters. As such, the article contributes both conceptually and methodologically to the small but growing body of work on users’ algorithmic literacy in an increasingly personalized media environment.
Algorithmic Literacy
With the growing number of personalized media platforms, having a basic understanding of what algorithms are and do has become an indispensable element of news literacy (Head et al., 2020). Especially for users under the age of 25 years, the prevalence of social media and news apps in their media diets means that algorithmic gatekeeping processes increasingly affect the way in which they access news (Kalogeropoulos, 2019). Recommender algorithms can help limit information overload and support users in finding relevant news stories in today’s vast stream of information. However, by nature, these structures also limit user agency, making automated decisions about what information to display and what to filter out. Such decisions are far from neutral. Just as knowledge about media production helps citizens judge processes of editorial gatekeeping, understandings of algorithmic personalization are crucial for users to critically assess the completeness and balance of the news they encounter (Powers, 2017). Users may then choose to be satisfied with an algorithm’s selections, negotiate its decisions by adjusting personal settings, or add additional sources to their news repertoires. In this article, following Bruns (2019), I will use the term “algorithmic literacy” to refer to the combination of users’ awareness, knowledge, imaginaries, and tactics around algorithms.
Despite the increasing influence of algorithms on the information young people encounter, surprisingly, many media literacy programs pay only marginal attention to content personalization. Traditionally, media education has focused on teaching students how to critically evaluate information, not on how platforms and technologies may affect such content (D’Ignazio & Bhargava, 2015; Head et al., 2020; Mihailidis, 2018). Moreover, due to the commercial nature of most social media platforms, algorithms’ selection mechanisms remain opaque (Pasquale, 2015). This has sparked a number of mainly US-centric studies about users’ awareness of and knowledge about how algorithms mediate the content they encounter (Bucher, 2017; Eslami et al., 2015, 2016; Gran et al., 2020; Rader & Gray, 2015; Schmidt et al., 2019). These works find mixed results: while Rader and Gray (2015) found that almost three-quarters of Facebook users were aware their news feed would not show all posts of the friends and pages they followed, a similar study by Eslami and colleagues (2015) found that only 37% were aware of Facebook’s News Feed algorithm. Findings about users’ interventions in algorithmic news selection are also inconclusive. Whereas Duggan and Smith (2016) concluded that almost 40% of their survey respondents actively curated their information environment, Powers (2017) found that most college students he surveyed did not, because they did not know how to customize their feeds on Google News or Facebook.
These mixed findings can partially be explained by the demographic diversity in user samples studied. Gran et al. (2020) found significant associations between algorithmic awareness and both users’ level of education and gender. Algorithmic knowledge is also predicted by the level of education and has a negative association with age (Cotter & Reisdorf, 2020). According to the authors, however, socioeconomic factors only account for a small proportion of differences in algorithmic knowledge. The strongest predictor for algorithmic knowledge, they find, is users’ prior experience with algorithms: both their frequency and breadth of use. This matters because much previous work on algorithmic literacy pertains to subgroups that are already relatively digitally literate, such as algorithmic activists (Velkova & Kaun, 2019), web shop owners (Klawitter & Hargittai, 2018), and social media influencers (Bishop, 2019; Cotter, 2019). This raises questions about the algorithmic literacy of “ordinary” users. Cotter and Reisdorf’s (2020) findings support earlier conclusions by DeVito et al. (2018), who characterize building algorithmic literacy as a matter of “learning by doing” (see also Bucher, 2017). Conceptualizing algorithms as “experience technologies” (Blank & Dutton, 2012), these authors argue that algorithms are understood through use.
What remains unclear in such studies, however, is what counts as a meaningful algorithmic experience that stimulates such reflections, and how and when these experiences contribute to users’ algorithmic literacy. This article explores young people’s algorithmic experiences. Instead of emphasizing what young people should know about algorithms, it considers the perceived knowledge and tactics that users build through their everyday usage of social media. Such an experiential focus is inclusive of different ways of knowing, including forms that go beyond the cognitive. Consequently, the concept of experience helps us to conceptualize algorithmic literacy as a form of knowledge that is not just rational, but also tacit, intuitive, situated, and lived (Tuan, 1977).
Studying Algorithmic Literacy
Previous work has used various methods to study users’ algorithmic literacy, from survey questionnaires (Cotter & Reisdorf, 2020; Gran et al., 2020; Rader & Gray, 2015) and experiments (Eslami et al., 2015) to focus groups (Siles et al., 2019) and in-depth interviews, often aided by card sorts (DeVito et al., 2018), drawing exercises (Hargittai et al., 2020), or other projective techniques. Of course, methodological lenses always address particular aspects of a research topic while ignoring others. Studying algorithmic literacy, however, comes with a number of additional challenges, which make the choice of a particular research design or tool even more crucial.
First, digital literacy is usually assessed deductively. A list of knowledge, skills, and competences considered essential for dealing with digital media is defined; then, users are tested (usually via surveys) on the extent to which their abilities meet these demands (e.g., Plantinga & Kaal, 2018; Van Deursen & Van Dijk, 2010). In the case of algorithmic literacy, however, such a top-down approach is problematic: the black-boxed nature of algorithms makes it impossible to benchmark users’ knowledge and skills. Shrouded in secrecy, even developers at social media companies often have only partial knowledge of how algorithms work. Most are complex combinations of collectively developed pieces of code, in which a multitude of computational techniques and variables interact. This makes their logics difficult to disentangle, even for highly skilled engineers (Burrell, 2016; Seaver, 2017). Moreover, platforms’ algorithms are continuously evolving in response to the user data they collect (Cotter & Reisdorf, 2020). Therefore, it is difficult to develop any a priori measures of what an algorithmically literate user should know.
A second challenge for studying users’ algorithmic literacy is the opaqueness of algorithms, which largely function “behind the scenes” (Hamilton et al., 2014; Pasquale, 2015). This means that users are unlikely to notice algorithms in their everyday use until they start producing unexpected, irrelevant, or uncanny results. In her study on Facebook users’ algorithmic imaginaries, Bucher (2017) found that algorithmic awareness is triggered by surprise or consternation, for example, when the algorithm generates incorrect classifications or makes faulty predictions. It is in these instances that the assumptions of the algorithm become visible and invite reflections on its workings. However, when algorithms behave as expected, they blend into the background, making them difficult to notice. This raises the question of how we can study algorithmic literacy when the interactions between users and algorithms are not necessarily conscious encounters.
Third, much existing work on algorithmic literacy explicitly uses terms such as “algorithm” to assess people’s awareness, knowledge, and skills (e.g., Gran et al., 2020; Min, 2019). While this may seem obvious, previous research shows that knowing such terminology is not self-evident, nor does a lack of technological vocabulary necessarily indicate that a user lacks algorithmic awareness. Siles et al. (2019), for example, found that Netflix users rarely explicitly used the word “algorithm,” but did form hypotheses about why the system showed particular recommendations. Similarly, social media users might not always know the term “algorithm,” but might still be able to reflect on how media customize the news that they see based upon their everyday experiences. Ignoring such experience-based forms of knowledge thus runs the risk of underestimating people’s algorithmic understandings. To study algorithmic literacy, therefore, we need methods that allow researchers to capture more intuitive, tacit forms of knowledge and make it possible to explore openly how people understand and engage with algorithms.
This article takes the notion of experience as a point of departure as a means to overcome these hurdles. It proposes a qualitative research design combining reflexive in-depth interviews with the concurrent walk-through method (Light et al., 2018) and think-aloud protocols (Charters, 2003) to capture young people’s algorithmic experiences. This combination has three advantages. First, both methods allow for exploring algorithmic literacy bottom-up. Whereas the interviews capture users’ knowledge about algorithmic curation, the think-aloud walk-through makes it possible to observe how such algorithmic literacy is applied in practice. Second, while the interviews highlight the unspecific experiences that users acquire over time (Erfahrung), the walk-through and think-aloud methods bring consciousness to users’ concrete encounters with algorithms (Erlebnisse) (see Turner’s, 1985, distinction in Kaun, 2012) that they might normally take for granted (Hamilton et al., 2014). Finally, conducting walk-throughs in addition to in-depth interviews enables users to show, not just tell, researchers their algorithmic experiences. This is useful for users who consciously experience personalization, but lack the technological vocabulary to articulate such encounters.
Methods
This qualitative, exploratory study draws upon semi-structured, in-depth interviews with a diverse group of 22 Dutch young people between the ages of 16 and 26 years, who used social media on a daily basis. This “Generation Z,” which shares the experience of growing up in technologically mediated environments characterized by increasing digital, social, and individual media use, uses personalized media platforms relatively frequently, both in general and for news specifically (Newman et al., 2020). They are therefore an interesting demographic for studying users’ algorithmic experiences. In the Netherlands, where Internet and smartphone penetration rank among the highest in the world, people under 20 in particular are heavy social media users, spending over two hours on social media per day (Van der Veer et al., 2021). Interviewees were recruited online through snowball sampling: members of the research team asked teachers, friends, and family to invite initial participants fitting the age demographic, who then suggested additional interviewees. Participants were equally split in terms of gender (male/female) and lived across the Netherlands. Most interviewees were students; others were recent graduates working in various fields; some were unemployed. While the aim was to recruit youth with different educational levels via their teachers, online recruitment proved more challenging among students in lower vocational education, leading to an overrepresentation of participants pursuing or holding a bachelor’s degree or higher.
Semi-structured interviews, averaging 46 minutes, were conducted by the author and a student assistant in April–July 2020. Consequently, the outbreak of the COVID-19 pandemic affected the research considerably. Face-to-face interviews were replaced by online interviews via Skype, which allows for screen sharing on both desktops/laptops and smartphones. This made participation more convenient for some participants, eliminating traveling time and allowing them to take part from the comfort of their homes. Another advantage was the possibility of making screen recordings, providing detailed information on users’ tactics around algorithms that might not have been obtained as easily in person. However, the use of Voice over Internet Protocol (VoIP) tools may exclude less digitally literate youth (Hanna & Mwale, 2017; Lo Iacono et al., 2016). With schools and public libraries physically closed during the peak of the pandemic, young people without a stable WiFi connection or quiet space at home may have felt discouraged from participating, a drawback of VoIP interviewing that affects youth from disadvantaged communities in particular.
Participants were asked to sign an informed consent form (digitally) prior to the interview. As an ice-breaker, the first part of the interview explored the interviewee’s overall mobile and social media use, including platforms used, everyday routines, and spatiotemporal and social contexts of use. Second, participants were asked to move through two to three of their social media apps as they normally would, thinking aloud about the content these platforms presented to them and theorizing why these platforms would display these stories. This walk-through exercise proved extremely helpful for having interviewees reflect on algorithmic curation and provided plenty of avenues to probe for algorithmic awareness, experiences, and tactics. Finally, a series of follow-up questions connected the previous two parts, asking participants to reflect on how algorithmic experiences affected their attitudes toward news and content personalization and their social media use. To avoid steering the conversation, algorithms were not explicitly mentioned until this final stage of the interview (see Hargittai et al., 2020, for a similar approach). Similarly, to explore young people’s experiences of algorithmic news personalization in an open-ended manner, no predetermined definition of “news” was given. Instead, the discussion was purposively broadened to any content that young people considered as “things happening that are important to know,” to be inclusive of young people’s increasingly broad conceptualizations of news (e.g., Edgerly & Vraga, 2020; Swart et al., 2017), which do not always align with classic news values. Participants did not receive any financial reward for participation. To test Skype’s screen-sharing process, the clarity of the interview questions, and the feasibility of the walk-through task, two pilot interviews were conducted. All interviews were screen-recorded in Skype and fully transcribed.
The interview transcripts were anonymized and analyzed thematically in Atlas.ti. A first round of open coding, examining the data line by line, resulted in a set of initial codes. On re-reading the material, these codes were aggregated into broader categories using axial coding (Corbin & Strauss, 2015). Finally, the data were reviewed again to identify theoretical connections between the different categories. This iterative process resulted in themes related to three dimensions of algorithmic experience: how young people get to know and make sense of algorithms (cognitive), how algorithms make them feel (affective), and what they do around algorithms (behavioral).
Results
Like most members of their generation (Kalogeropoulos, 2019), the interviewees heavily depended on social media to stay up-to-date. (Live) television, radio, and newspapers were almost completely absent in their news repertoires. Some used apps or websites of major news brands, usually a recent habit motivated by the pandemic. However, Instagram, Facebook, Twitter, and WhatsApp formed the most important gateways to news and journalism. News was an integral part of their social media use, although consistent with previous research (Kümpel, 2019; Swart et al., 2018a), it was usually not followed on purpose. Interviewees mostly discovered news via their social connections. This aligns with the prevalent news-finds-me attitude among this age cohort (Gil de Zúñiga et al., 2017; Toff & Nielsen, 2018) and highlights the significance of algorithmic and social curation for young people’s contemporary news use.
All participants used Instagram. Consequently, most of the algorithmic experiences discussed in this article originate from users’ walk-throughs of this platform, where news use has doubled since 2018 (Newman et al., 2020). Other social media reflected upon during the interviews include YouTube (which at the time of research actively recommended COVID-19-related news), Facebook and Twitter (infrequent but important sources for finding news), LinkedIn (used for work-related updates), and the generally less news-focused platforms of Snapchat (via its Discovery section) and TikTok. Although all participants used WhatsApp, a significant space for sharing and discussing news (Swart et al., 2018b), no walk-throughs for the app were conducted, as it does not contain suggested posts or other algorithmic recommendations. For the sake of analytical clarity, this section discusses the cognitive, affective, and behavioral dimensions of young people’s algorithmic experiences separately. It should be noted, however, that in practice, these often interrelate.
Understanding Algorithms
The cognitive dimension of algorithmic experience entails how users come to understand algorithms and how their everyday encounters with algorithms shape these sense-making processes. Young people’s algorithmic awareness varied significantly across the sample. Some had never heard the word “algorithm” at all; others could describe categorization, profiling, and personalization processes in detail. Reflecting algorithms’ opacity in everyday life, interviewees rarely mentioned the algorithmic curation of their feeds of their own accord until actively probed, with a few exceptions. Consistent with earlier findings (DeVito et al., 2018; Hautea et al., 2020; Rader & Gray, 2015), such spontaneous reflections occurred when algorithms produced unexpected or confusing results:
“I always find those Stories so odd. It always starts with something else. For instance when my cousin or friends post, I very often won’t see it.”
“Why don’t you see what your friends post, you think?”
“I have absolutely no idea. [. . .] It’s very occasionally [that they post], they’re not active on social media. [. . .] But I think I follow about 800 people. Even if only half of them posts a Story, I’m missing out on a lot.”
While Tom (22) initially said he had “no idea” why not all content was shown to him, he intuitively sensed that something was up and started thinking about possible explanations. This is exemplary of the way participants construct hypotheses about how algorithmic curation works through their everyday use. As in all sense-making processes, when confronted with unfamiliar situations, people fall back on their past experiences, habits, or knowledge to fill in the blanks and understand what is going on (Hargittai et al., 2020; Reinhard & Dervin, 2012).
The interviews also show, however, that algorithmic awareness and users’ sense-making processes around algorithms are contextual: some algorithms are easier to recognize than others. Partially, algorithmic awareness hinges on the absence or presence of explicit personalization cues. Ellen (20) remarked that “on Twitter you often see, for example, ‘that person likes that tweet’ and that’s why you get to see that tweet. But on Instagram, I don’t—you don’t really have that.” Features branded as “Suggestions” or “For You” pages also signposted the personalization of a certain app or website. Second, the algorithm’s design can make personalization more or less evident to users. The more bluntly users are categorized and classified, the more “expectancy violations” (Hargittai et al., 2020) occur, and thus the more visible algorithms become. For instance, respondents perceived the personalization of YouTube’s Suggested Videos as obvious or even aggressive. They judged the way the platform infers recommendations from previous watching behavior as “over the top” (Onno, 21) and too rigid an interpretation of their tastes, one that ignores how users’ news interests change over time (Alvarado & Waern, 2018; Monzer et al., 2020). As Sharon (25) said about a video of celebrity news bulletin RTL Boulevard, “I just watched it once; I don’t want to see it again.”
Young people’s sense-making of algorithms revolves around three elements. First, the platform itself plays a role in how young people make sense of algorithmic curation. The interviewees in the sample used three to seven different social network sites. Therefore, they could compare how different platforms worked and assess algorithms in relation to each other. A second sense-making strategy is to compare algorithms within an app or website, per feature it offers. For example, participants made a distinction between the personalization of Instagram’s timeline, the Stories section, and its Explore feature. Finally, the type of content (i.e., recommendations vs advertisements vs “regular” content) affects young people’s awareness and perception of the algorithm. Somewhat surprisingly, high algorithmic awareness in one context does not necessarily make users more conscious of potential algorithmic curation in other situations. For example, targeted advertising on social media was relatively obvious to users (see also Ruckenstein & Granroth, 2020). Yet, users’ knowledge about tailor-made advertisements did not lead participants to reflect on the possible personalization of non-sponsored stories in their timeline. Thus, while folk theories are part of larger data assemblages (Head et al., 2020; Siles et al., 2020), spillover effects of algorithmic literacy between contexts are not self-evident.
No two algorithmic imaginaries discussed by the interviewees were the same, reflecting the difficulty users face in making sense of algorithmic curation. Contrary to earlier findings, none of the respondents mentioned media education as a source for algorithmic knowledge or imaginaries (cf. Ruckenstein & Granroth, 2020). Similarly, young people’s study or profession rarely affected their algorithmic literacy, except for a respondent who livestreamed on Twitch and one student with programming experience (cf. Lomborg & Kapsch, 2019). Instead, young people based their hypotheses primarily on their personal experiences of social media use (see also Cotter & Reisdorf, 2020; DeVito et al., 2018). Experiences of other users formed an important additional, exogenous source of algorithmic knowledge. For example, Finn (22) commented that the weight that YouTube assigns to recency is “less than on Instagram, you can still see recommended videos from 2012. And that happens to multiple people simultaneously, because you see that in the comments, that people say: okay, why is this recommended to me only now?” Similarly, interviewees mentioned how controversies about major changes in algorithms, such as Instagram’s shift from chronologically ordered to “relevance-based” timelines, sparked discussions among friends or other users they followed on social media. A final source of algorithmic literacy is reporting by popular media. An example is Amber (21), who mentioned a YouTube video about the “microphone hypothesis” (see also Head et al., 2020). This folk theory assumes that social media companies base their content on (illegally acquired) recordings of users’ conversations: “Two people started to talk about cat food the entire week and then they only got push messages about cat food.” While this theory has not been proven scientifically (Pan et al., 2018), it was remarkably popular among the interviewees.
This shows how media coverage of privacy scandals, in combination with algorithms’ opacity, can create collective folk theories that guide users’ algorithmic imaginaries (see also Bucher, 2018). Interestingly, these perceptions did not discourage young people from using social media. However, they did affect users’ emotions around platforms’ algorithms, a second dimension to algorithmic experience.
Sensing Algorithms
The affective dimension of algorithmic experience considers the moods, affects, and sensations that algorithms generate, which may also invite reflections on algorithms and contribute to users’ understandings (Bucher, 2018; Kennedy & Hill, 2018; Ruckenstein & Granroth, 2020). Whereas young people’s imaginaries of what algorithms do differed widely, there was less variation in users’ perceptions of what algorithms are and how these imaginaries made them feel. Three perceptions can be distinguished among algorithm-aware participants. First, one group of respondents conceived of algorithms as a neutral “formula” (Arnold, 23), “recipe” (Sophie, 18), or “calculation process” (Ari, 23) that requires input (i.e., data) to generate a certain output. This technological perception frames algorithms as a mathematical, rational process. Consequently, this group displayed no strong emotional response to algorithms: they were simply a means to an end. A second group perceived algorithms as useful guides or “traffic controllers” (Iris, 22) that highlight relevant stories. These young people mainly emphasized the benefits of recommender systems, such as saving time and discovering news they might not have uncovered themselves. For example, Michel (23) liked how Facebook’s algorithm would surprise him, even though its suggestions were not always apt: “It’s got something. It’s why you see random things that you normally wouldn’t see, that sometimes are unexpected.” Finally, for some interviewees, algorithms evoked strongly negative emotions. They connected algorithms to the commercial nature of social media companies and thought of them as powerful forces intended to stimulate purchases. This is done by “constantly rewarding and feeding you” (Tom, 22) with messages “you are most susceptible to” (Guus, 23). Algorithms were also linked to issues of censorship, due to the unknown content that recommender systems made them miss out on (“you don’t know what you don’t see”).
Algorithmic curation prompted reflections in particular through feelings of surprise, both positive and negative. On the one hand, interviewees experienced the sense of being recognized by algorithms as pleasant (see also Ruckenstein & Granroth, 2020). As Roel (26) explained, a “good” algorithm ensures a smooth user experience and makes you feel seen. Conversely, algorithms that misrepresented young people’s tastes or were overly simplistic appropriations of their interests provoked irritation. Knowing you are being surveilled raises expectations: there should be a fair trade-off between giving up data privacy and ease of use (Kennedy et al., 2017). Simultaneously, however, algorithms that functioned too well created the uncomfortable feeling of being watched or even exploited, especially when users could not logically deduce their assumptions from the data they had consciously supplied. Such emotions invite further reflections. As Joeri (19) said, “As a user, it’s kind of awkward. You think: why does this happen? How do they know this?” Recent privacy scandals around Facebook and uncertainty about what data were being collected had fostered skepticism toward personalized news apps or websites. For example, while Marit (16) said she would like news apps to be tailor-made to her preferences, she immediately added data privacy concerns as a condition: “You know, as long as they don’t eavesdrop on you via your microphone.”
As the previous example shows, young people’s emotional experiences of algorithms feed into their norms and attitudes about how algorithms ought to function. For news apps or websites specifically, users mentioned three expectations, which reflect their affective experiences of algorithms on social media. First, as mentioned above, social media’s surveillance practices have resulted in suspicion toward all recommender systems, which they fear will collect user data without consent. Young people suggested that increasing algorithmic transparency (Diakopoulos & Koliska, 2017) might be a way for news organizations to overcome this. For example, Joeri (19) suggested providing an overview of the data points that recommendations are based on, to help users “understand how they know.” Second, although surprising content suggestions on social media were generally experienced as irritating or odd, on news sites and apps, young people actually preferred to receive unexpected content. They saw the supply of topically diverse news as a key role of media organizations (see also Fletcher & Nielsen, 2019; Thurman et al., 2019). These youth, however, automatically associated algorithms with the reinforcement of existing user preferences. They found the notion of personalized media as commercial enterprises, aiming to maximize attention and time spent, difficult to align with journalism’s public and democratic role. Finally, checking the news and staying up-to-date is about being on top of things; algorithmic curation, by nature, represents a loss of control. Although recommender algorithms may have advanced considerably over the past years, young people still feared algorithms would make them miss out on important news, confirming previous findings (Groot Kormelink & Costera Meijer, 2014; Sørensen, 2013). Some also considered their news preferences too complex and variable to be generalized in algorithmic models.
Although they appreciated “not having to filter yourself what you want to see” (Jasmijn, 20), interviewees did at least want to have the opportunity to view all available content and to manually adjust user profiling (Alvarado & Waern, 2018). Of course, making manual personalization options available does not necessarily mean such functionalities will be used in practice: previous work has reported users consider them too laborious to use (Groot Kormelink & Costera Meijer, 2014; Monzer et al., 2020; Sørensen, 2013). This brings us to the third dimension of algorithmic experiences: users’ behavioral experiences.
Engaging With Algorithms
The behavioral dimension to algorithmic experience relates to what young people do around algorithms. Through their everyday interactions with algorithms, young people may build up understandings of algorithmic news selection. The interviewees were aware of various explicit personalization strategies (Haim et al., 2018) through which they might intervene in the composition of their news feeds, such as unfollowing accounts and hashtags; using a platform’s “hide,” “mute,” or “report” function; or setting up notifications for particular accounts to not miss out on new posts. Other suggested practices could be classified as practices of “gaming the system” (Cotter, 2019), such as deliberately not clicking on posts to prevent the display of similar content, installing ad blockers, or using a Virtual Private Network (VPN). In practice, however, interviewees rarely engaged in such tactics, for four reasons.
First, young people imagined their own role in shaping algorithms as limited. While some, like Tara (18), were confident they could compose their own timelines, most participants thought of algorithmic curation as something that just happened to them on social media and was beyond their control. As Caroline (23) jokingly put it, “[You] just search less and trust in the algorithm gods, sort of, to present you with interesting things and make good choices on your behalf.” Contrary to previous findings (Monzer et al., 2020), the reciprocity between algorithms and users was not intuitively understood by participants. Young people with low algorithmic awareness in particular tended to downplay the effect of their own behavior on the algorithm. Others had used explicit personalization options in the past, but had been dissatisfied with the results. Amber (21) complained that while Instagram’s “hide” button gave the illusion of agency, it was of little help in making her timeline more relevant: “For the recommendations, they sometimes make the same people or things return that you have already rejected or that you didn’t like, so that’s quite annoying.”
Second, algorithms’ feedback loop conceptualizes users as rational beings who consciously and deliberately engage with news. For the algorithm, users are simply what they do, neglecting elements that complicate such a behavioralist conception of the self (Fisher & Mehozay, 2019). However, news users regularly navigate media in ways that are not (fully) representative of their preferences, tastes, or interests (Swart et al., 2017). Similarly, the interviewees typically did not actively strive for the most relevant, interesting, or newsworthy feed possible. During the walk-through, participants would frequently comment on accounts that they followed, but thought to be uninteresting. Yet, they did not unfollow such accounts: in practice, routines appeared difficult to break. The habitual nature of news consumption explains why young people would keep clicking or watching items they described as “junk,” even when they were aware that such behavior might result in similar recommendations in the future.
Third, challenging algorithms takes effort. As Michel (23) explained, “I know there are options- so these messages will no longer be displayed. I think if you’d actively do that, Facebook would become better, but I don’t take the time to do it myself.” Instead, it was easier to scroll past irrelevant content. Especially on TikTok, where even following other users is optional, such passive usage was attractive. Ari (23), for instance, followed only a handful of users on TikTok, expecting his “For You” feed to improve automatically over time as he provided more data: “I sort of let it run its course. [. . .] I won’t on purpose think: I just want soccer now, so I’ll search for soccer. I don’t do that.”
Finally, young people overall were reasonably content with how recommender algorithms classified them. Although algorithms’ occasional oversimplifications of their interests invoked irritation, none of the interviewees had experienced effects of user profiling that were explicitly discriminatory (cf. Bucher, 2017; Lomborg & Kapsch, 2019). Possibly, users who do experience such harmful, offensive consequences are more likely to engage with algorithms to actively resist their decisions (see Velkova & Kaun, 2019).
Conclusion
Understanding and knowing how to deal with algorithmic curation have become crucial for critically and mindfully navigating today’s increasingly personalized media landscape (Head et al., 2020). This article explored young people’s experiences of news personalization on social media to understand under what circumstances people’s everyday encounters with algorithms contribute to their algorithmic literacy. It argues that to fully understand how platforms exert power in everyday life, we should capture how news users make sense of, feel about, and interact with algorithm-driven media on their own terms, including their intuitive, affective, and experience-based forms of knowledge.
While algorithms have been conceptualized as experience technologies that users learn about by doing (Cotter & Reisdorf, 2020; DeVito et al., 2018), previous research also shows that using algorithmically curated systems does not automatically facilitate literacy for all (e.g., Powers, 2017). This article finds that this process hinges on three conditions. First, awareness of the reciprocal relation between users’ own behavior and algorithms’ decision-making needs to be triggered. This occurs when algorithms violate users’ expectations (Bucher, 2017; Hargittai et al., 2020), but also through platforms’ explicit disclaimers explaining why certain posts are shown. Second, the findings suggest that breadth of use contributes to algorithmic literacy more than frequency of use. While algorithmic awareness was context-dependent, the more platforms participants used, the more elaborately they could reflect on what algorithms are and do. This included understandings of how algorithms represent particular values and interests, and how news personalization could affect their informed citizenship. Finally, whereas expectancy violations can foster insight into algorithms, they may also hinder users’ sense of agency and their engagement with algorithms. Algorithmic interventions are stimulated when interactions with algorithms are experienced as positive, for instance, when prior use of “hide” or “like” buttons resulted in story suggestions that were perceived as helpful. When such experiences do not live up to users’ expectations, however, this contributes to a sense of losing control and discourages future interventions.
Furthermore, while experiences with algorithms do contribute to young people’s algorithmic literacy, they do not automatically equip them with the vocabulary to articulate such tacit knowledge. Most participants in this study were highly educated. Moreover, the study was conducted in the Netherlands, a nation characterized by high levels of Internet, mobile, and social media use. These interviewees could therefore be expected to be relatively digitally and algorithmically literate. Yet, confirming studies conducted in other countries (e.g., Siles et al., 2019), interviewees did not necessarily use the term “algorithm” when talking about their experiences of news personalization: some had not even heard of the word. Such gaps in vocabulary have important methodological implications for studying algorithmic literacy. This article has suggested walk-through interviews as a methodological approach that may help scholars (across demographic groups and country contexts) to overcome these limitations. By asking more indirect, open-ended questions about why participants see what they see on personalized media, and by probing for algorithmic experiences based on users’ displayed practices, such methods can yield more inclusive perspectives on algorithmic literacy.
Finally, the interviews suggest that news media aiming to further personalize their content may have to overcome considerable skepticism. Young people were wary of algorithmically curated news, due to fears of missing out, concerns around surveillance, and because they wanted journalism to contain elements of surprise (see also Thurman et al., 2019). They found it difficult to conceive of algorithms beyond their role as amplifiers that reinforce existing user preferences, preferring editorial over algorithmic news curation. This is understandable given the social media algorithms that young people are most familiar with. However, it also means they had trouble imagining that news organizations might choose to employ algorithms differently (Möller et al., 2018). Future research could consider whether an increase in algorithmic transparency, for instance, by presenting more explicit personalization cues or granting users more opportunities to intervene in user profiling, might mitigate such concerns.
Acknowledgements
The author would like to thank Silke Wester for her help with recruiting participants, conducting part of the interviews, and transcribing the recordings.
Declaration of Conflicting Interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: The research for this article was funded by a Writing Up Grant from the Faculty of Arts, University of Groningen.
