Abstract
Widespread misleading stories circulating in networked public spheres have raised debates about their potential harm to democracies, organizations, and individuals. In the face of this challenge, educators have been rightly questioning how to prepare students to thrive in this so-called post-truth era. Scholarship on media and information literacies has often focused on incorporating new topics to address the issue and re-articulating learning goals. This body of work, however, does not address the question of how to deal with fast-paced changes that surround information disorder in the digital age. Based on Stuart Selber’s multiliteracies, this article proposes a set of competencies in combination with an analysis of the factors that contribute to the creation and circulation of false information. My argument focuses on students’ need to effectively identify misleading stories, thoughtfully question the role of technology in society, and ethically engage in civic dialogues. Taken together, these skills and knowledge provide a framework that they can expand upon as the landscape of information disorder shifts.
The circulation of misleading information in public spheres is not a new phenomenon; however, the rise of the Internet and social media gives new contours to the problem (Hobbs & McGee, 2014). Across the world, individuals and institutions are dealing with the impacts of the widespread proliferation of false or exaggerated stories (Marwick & Lewis, 2017; Newman et al., 2017, 2018; Tufekci, 2017). In the United States, the Pew Research Center surveyed 1,002 adults and found that 64% of them believe that fabricated stories create a great deal of confusion about current events (Mitchell et al., 2016). Individuals across several demographic indicators, such as age, race, educational level, and income, widely shared this sense of disorientation. There is growing evidence of malicious groups’ attempts to interfere with democratic institutions and processes through disinformation campaigns (Paul & Matthews, 2016). Relatedly, polarization in political debates and distrust in traditional news outlets have increased inside and outside the United States (Newman et al., 2018). The issue also raises concerns about “the long-term implications of disinformation campaigns designed specifically to sow mistrust and confusion and to sharpen existing sociocultural divisions using nationalistic, ethnic, racial and religious tensions” (Wardle & Derakhshan, 2017, p. 4). These trends are commonly described as the fake news phenomenon. However, I refrain from adopting this expression because it does not fully capture the complexity of the problem. Instead, I adopt Wardle and Derakhshan’s (2017) concept of information disorder, which encompasses the intentional and unintentional circulation of misleading, false, or harmful news in networked public spheres.
Regardless of the name employed to outline the issue, young adults are especially susceptible to false information because of their tendency to consume most of their news through social media (Shearer, 2018). Research reveals that despite spending considerable time on the Internet, college students are not always prepared to evaluate the quality of online information correctly (Kahne & Bowyer, 2017; Wineburg & McGrew, 2017). Given these challenges, educators are justifiably asking how they can prepare learners to become well-informed in this so-called post-truth era. Although there is an overall consensus that media and information literacies are fundamental to dealing with the issue at hand, they might not eradicate it. Still, there is evidence that these literacies help: in an experiment that asked young participants to judge the trustworthiness of simulated online posts, Kahne and Bowyer (2017) found that individuals who reported higher levels of media literacy displayed better abilities to discern accurate from deceptive claims. In light of this, many scholars advocate for curricular changes that encompass new types of content. Hobbs (2017), for instance, argues for teaching students the different genres of misleading information that are usually conflated under the umbrella of fake news. Bulger and Davison (2018) highlight the importance of fostering cross-disciplinary collaboration and discussing the new media environments where false stories spread. While others focus on instructional practices, such as project-based learning (Friesem, 2019), some defend revisiting the goals of media and information literacy altogether. For example, Mihailidis and Viotty (2017) propose shifting the focus from evaluating messages toward reflecting on how one’s media consumption impacts others.
While this body of work offers essential contributions, none of these studies addresses the central question of how to deal with the fast-paced changes that surround the information disorder phenomenon, which itself impedes civic dialogue. The technologies used for deception evolve; algorithms are updated; social media platforms change their terms of service; political groups adapt their strategies; and so on. Thus, I argue that students must have a set of knowledge and skills (competencies) that allows them to understand shifts in the landscape of information disorder. Selber’s (2004) multiliteracies approach guides my argument by suggesting that individuals need to become functionally, critically, and rhetorically literate. I articulate how these three dimensions can be adapted to prepare learners to examine information disorder. Learners need to be capable users of technology with the ability to identify the genres of misleading stories and evaluate their trustworthiness. They must also become thoughtful questioners of technology who understand how false information spreads. Finally, they must become reflective media producers who analyze media affordances and ethically engage in conversations about civic matters, a competency that is central to dialogue on social media. Taken together, these competencies provide learners with a framework that they can expand upon.
Multidimensional Challenge
The term fake news is broadly applied in discussions about becoming informed in the digital age. Many scholars, however, argue that the expression conceals the issue’s complexity (Friesem, 2019; Hobbs, 2017; Marwick & Lewis, 2017; Wardle & Derakhshan, 2017). More precise terminology can better capture the full spectrum of misleading stories, which range from skewed accounts to complete fabrications. For instance, the European Association for Viewers Interests (2017) identified 10 types of misleading news that differ in their motivation and impact, such as propaganda, clickbait, hoaxes, and conspiracy theories. Similarly, Wardle and Derakhshan (2017) wrote a report for the Council of Europe that highlights nine types of information falling under three categories:

• Mis-information is when false information is shared, but no harm is meant.
• Dis-information is when false information is knowingly shared to cause harm.
• Mal-information is when genuine information is shared to cause harm, often by moving information designed to stay private into the public sphere. (Wardle & Derakhshan, 2017, p. 5)
Information disorder is by no means a new phenomenon; however, the Internet age has amplified the speed, scale, and scope that it can reach. For this reason, it is fundamental to understand the several factors that contribute to the problem.
Technological Affordances and Uses
Individuals and groups trying to disinform others tend to spread their messages through blogs, websites, message boards, forums, and mainstream social media sites, such as Facebook and Twitter (Marwick & Lewis, 2017). According to boyd (2014), four technological affordances shape the communication taking place in these digital spaces: persistence, spreadability, searchability, and visibility. Overall, online messages and expressions do not disappear after a person sends them, which enables others to search for that information and spread it. All of these aspects boost the visibility of online content. Algorithms are also crucial to consider: they are among the most opaque features of digital platforms, and they determine which content surfaces on search engines and social media feeds (Klawitter & Hargittai, 2018). To increase user engagement, social media platforms often use algorithms to tailor content to their users, creating filter bubbles (Pariser, 2012) that can cut individuals off from diverse perspectives.
Digital technologies magnify the convenience of fabricating or manipulating information with tools that range from cheap, user-friendly software to machine learning programs (Paris & Donovan, 2019). For instance, deepfakes are a cutting-edge technique that allows computers to create realistic videos of a person appearing to say or do almost anything. There are also technologies employed to spread information without human assistance, such as bots. These networks of automated accounts on social networking sites, such as Twitter, have been used to skew political debates inside and outside the United States (Howard & Kollanyi, 2016; Marwick & Lewis, 2017; Yuan et al., 2019). Taken together, these tools can greatly amplify misleading stories.
This media environment also enables different news consumption habits. A report from the Reuters Institute highlights that 55% of individuals in 38 countries prefer to access the news through search engines, social media, or aggregators (Newman et al., 2018). In the United States, 47% of respondents said that their first contact with the news happens through social media and messaging apps. While Facebook ranks as the most used platform in the surveyed countries, other applications, such as Instagram and WhatsApp, are attracting growing numbers of users. The current media landscape offers its users a high-choice environment. This scenario can broaden one’s perspectives or enable echo-chambers (Sunstein, 2017). Individuals become trapped in echo-chambers that shape their worldviews when they use the Internet only to absorb content aligned with their interests or beliefs. This scenario raises communication challenges because shared experiences are a precondition for a well-functioning democracy: they create a common ground for individuals to engage in dialogue and negotiate their differences (Sunstein, 2017). Research highlights that interest in politics and diversity in media consumption can reduce the likelihood of being in an echo-chamber (Dubois & Blank, 2018). Without nurturing these dispositions, however, individuals can become more susceptible to radicalization and conspiracy theories (Marwick & Lewis, 2017).
The extent to which online audiences consume only news that aligns with their perspectives is not clear. There is empirical research suggesting that this is the case in the United States (Dilliplane, 2011; Levendusky, 2013; Mitchell et al., 2014). However, Nelson and Webster (2017) argue that these studies offer only a partial view of people’s news consumption habits due to methodological limitations. In their investigation of partisan selective exposure to news, they utilized web analytics data while exploring the ideological profile of sites and the paths that readers took to reach them. The findings reveal that audiences normally check established sources, such as the New York Times, Yahoo, or ABC. Furthermore, all sites in their sample, “including the more obscure, more partisan political news outlets, attract ideologically diverse audiences in proportion with the overall online audience” (Nelson & Webster, 2017, p. 10). Nevertheless, their findings do not refute the fact that misleading stories circulate broadly online.
As mentioned before, information disorder propagates mainly through the Internet, which particularly affects students because they are more likely to learn about the world through new media than through traditional outlets (Gasser et al., 2012). Time in front of screens, however, does not necessarily improve their evaluative skills. A group of researchers at Stanford University collected and analyzed 7,804 student responses to 56 tasks designed to test the ability to assess the quality of online information (McGrew et al., 2017). They administered these tasks at the middle school, high school, and college levels across 12 states. Despite the diversity of the sampled population, the results revealed a lack of preparedness at all educational stages. In another study, Wineburg and McGrew (2017) compared how 25 university undergraduates, 10 professional fact-checkers, and 10 PhD historians evaluated the credibility of online information. The findings of their experiment reveal that students and historians performed poorly in comparison with professional fact-checkers. The researchers explain that the first two groups relied on features that could easily be manipulated, such as logos and domain names. Fact-checkers, by contrast, left the original website quickly and used additional sources to check its credibility. They also understood how information circulates online, including the underlying logic of search mechanisms. These studies suggest that students are not necessarily prepared to evaluate the variety of online content that they consume and indicate that, with the proper skills, they could use online tools more effectively.
It is fundamental to highlight that people might consciously share misleading pieces of information. Sometimes, recirculating a story on Twitter, commenting on a Facebook post, or liking a video on YouTube serves as an identity-signaling mechanism. People engage in these practices to showcase their perspectives and their affiliation with like-minded others (Marwick, 2018). These individuals are more likely to rely on directional motivation rather than accuracy goals (Kahne & Bowyer, 2017). In other words, they are motivated to justify conclusions that align with prior beliefs rather than to check the correctness of the information. Social pressure, however, can affect this behavior: the threat of embarrassment from circulating something that others might perceive as fake can discourage people from doing so (Lazer et al., 2017). Thus, it is crucial to examine personal motivations for engaging with the news, because social interactions shape how one relates to it.
Contextual Imperatives
Media affordances and news consumption habits alone cannot explain the current state of information disorder. For this reason, it is also necessary to analyze the contexts in which networked technologies are used. Misinformation, disinformation, and mal-information spread within participatory cultures, in which individuals shape media messages as they recirculate, remix, appraise, critique, and evaluate content (Jenkins et al., 2013). This cultural practice offers low barriers to individual expression and strong support for sharing others’ creations (Jenkins et al., 2009). Henry Jenkins first coined the term participatory culture to describe the activities of fan communities that differed from traditional spectatorship (Jenkins, 1992). As online participation broadened, participatory culture came to encompass the actions of many different groups, including malicious ones (Marwick & Lewis, 2017). Thus, the so-called fake news phenomenon emerged in an online environment where audiences were already consuming and sharing content produced by ordinary individuals and unknown sources.
Not only cultural but also economic factors contribute to the problem. Unlike traditional news outlets, social media sites and search engines do not generate their revenues by selling credible information. Instead, they use algorithms to maximize user engagement and profit from data mining and individually targeted advertisements (Lazer et al., 2018). These platforms contribute to the creation of filter bubbles (personal ecosystems of information) when they filter and share information tailored to their users (Pariser, 2012). As a consequence, people might reinforce their beliefs in the absence of competing worldviews, which makes them more vulnerable to disinformation. In recent years, increasing public pressure has made social media enterprises recognize that their algorithms can amplify misleading content (Newman et al., 2017), prompting ongoing reforms to their platforms.
Disinformation can be lucrative for its producers as well. After the 2016 US election, it was reported that groups spreading false stories made substantial profits, ranging from US$10,000 to US$30,000 per month during the peak of their activities (Sydell, 2016). These illegitimate enterprises use the same infrastructures as legitimate businesses, which raises questions about the transparency of these operations (Braun & Eklund, 2019). In particular, the focus on reaching customers “at the precise moment their pattern of behavior seems likely to lead to a purchase—has led advertisers to focus on short-term interactions with consumers at the expense of marketing strategies centered on long-term brand awareness” (Braun & Eklund, 2019, p. 3). In other words, these brief interactions with users open space for malicious groups to spread their messages, too.
Political and ideological imperatives, in addition to economic motivations, play a role in the current state of information disorder. In a report for the Data and Society Research Institute, Marwick and Lewis (2017) explain how far-right movements use forums, message boards, and social media to spread White supremacist ideas, Islamophobia, and misogyny. Similarly, Bradshaw and Howard (2017) highlight that authoritarian governments in several countries have been using social media campaigns to control their populations. The extent to which these campaigns change behaviors is still debatable. Nevertheless, it is clear that individuals are increasingly exposed to misleading content. For instance, a study conducted after the 2016 election in the United States measured the level of exposure to fake stories among 1,208 participants and estimated that the average adult read and remembered at least one fake story (Allcott & Gentzkow, 2017). The results suggest that exposure to false information was widespread during the campaign.
Disinformation, misinformation, and mal-information can also be used for political goals other than advancing certain ideologies. Tufekci (2017) highlights that many groups spread false stories to create chaos, confusion, and apathy. This strategy responds to the decentralized nature of the Internet, which prevents any single institution, group, or government from having full control over it. In this context, it is challenging to stop digital pieces of information from circulating within and across countries. Authoritarian governments and groups cannot always resort to traditional forms of censorship to suppress messages, so they sow confusion instead. “The aim of twenty-first-century powers is to break the causal chain linking information dissemination to the generation of individual will and agency, individual will and agency to protests, and protests to social movement action” (Tufekci, 2017, p. 229). Modern Russian propaganda, for instance, follows this pattern of spreading several conflicting messages to prevent audiences from becoming informed (Paul & Matthews, 2016; Wardle & Derakhshan, 2017).
A person or group does not need to create or circulate false stories to capitalize on disinformation. Politicians in different countries have weaponized the expression fake news to discredit mainstream media and opponents (Farhall et al., 2019; McNair, 2018; Wardle & Derakhshan, 2017). Their actions increase confusion, polarization, and distrust in traditional news outlets. The Pew Research Center has been tracking America’s public priorities for more than 20 years, and its surveys highlight that the perspectives of Democrats and Republicans have never been so far apart (Jones, 2019). Currently, there is virtually no common ground between the two groups’ top priorities. Along the same lines, a report from the Reuters Institute highlights that political polarization “has encouraged the growth of partisan agendas online, which together with clickbait and various forms of misinformation is helping to further undermine trust in media” (Newman et al., 2018, p. 6). Their findings reveal that citizens of countries with a polarized political climate tend to display higher levels of concern about being ill-informed. In the United States, for instance, 67% of respondents agreed with a statement that they worry about false information on the Internet. The numbers were also high in places like the United Kingdom (70%) and Brazil (85%), and much lower in Germany (38%) and the Netherlands (31%). Thus, polarization, distrust, and information disorder reinforce one another.
In summary, media affordances and their uses, as well as contextual factors, contribute to disinformation in the digital age. The issue is multidimensional, and shifts in any one of these factors can alter the entire landscape of information disorder, which brings challenges for educators. Thus, the next section describes Stuart Selber’s framework that can be used to teach students to be well-informed in a context of complex and fast-paced changes.
Multiliteracies Framework
Educators and researchers of the New London Group coined the term multiliteracies in the 1990s. They developed this pedagogical perspective as a response to the educational needs of individuals in technologically rich societies (Cope & Kalantzis, 2005). One of its members, Stuart Selber, proposed an approach that encompasses functional, critical, and rhetorical literacies for the digital age. In Multiliteracies for a Digital Age, he offers a heuristic framework that combines theory with practice and avoids prescriptive rules. As a consequence, his guidelines can be adapted to address current challenges. According to Selber (2004), artifacts embody design decisions, and their uses are always influenced by contextual factors, such as economic and political imperatives. Selber does not view technologies as neutral or autonomous forces. For this reason, he develops multiliteracies as a humanistic project and not as a mere technical endeavor.
His humanistic view becomes evident in discussions about functional literacy. This dimension can sometimes be restricted to unreflective skill-based instruction. Critics often argue that functional literacy focuses on shallow learning goals at the expense of critical thinking. Lessons on how to share content on social media could fall under this description. Selber (2004), however, adopts a broader perspective that encompasses social conventions and specialized discourses. As mentioned before, he understands technologies as artifacts embedded in the fabric of social life. Therefore, using them requires an understanding of socially constructed norms that shape communication in different contexts. For example, analyzing underlying groups’ conventions on media platforms could fit this dimension of functional literacy. According to him, this approach can empower students to achieve educational goals, manage activities, and circumvent impasses.
Selber (2004) presents the need for critical thinking as a logical extension of functional skills. “Rather than setting up a critical computer literacy as necessarily oppositional to either functional approaches or, in fact, to students’ own goals, Selber designs an integrated educational model of analysis and critique that avoids unproductive binaries” (Moore, 2005, p. 427). Students must see digital tools as cultural artifacts to become informed questioners of technology. To reach this goal, they need to pay attention to dominant perspectives that shape design and technological cultures, and understand the intrinsic relation between computing infrastructures and contextual factors, such as political, economic, and educational imperatives. A practical example of this critical approach would be to evaluate the underlying rationale shaping algorithms on different social media sites.
Rhetorical literacy bridges the functional and critical dimensions because it positions students as producers of media content. To be reflective creators, they need to employ both functional and critical thinking skills. This dimension frames computers as hypertextual media and challenges individuals to understand how the very design of platforms also composes the texts that individuals produce online:

Anyone who has been overwhelmed by the sheer volume of information on the Internet knows that the metatext—a heavily linked text that connects other texts and their contexts in imaginative and meaningful ways—has become an invaluable online genre, one that requires in its construction a sophisticated knowledge of audience, purpose, context, and the various organizational schemes that hypertext can support. (Selber, 2004, p. 136)
The framework focuses on the hyperlinked nature of online texts and prompts students to explore the affordances of digital media. As the Internet evolves, multiliteracies should encompass other aspects as well, such as the algorithmic influence over the circulation of information.
Selber’s (2004) model allows for these types of adaptations because he does not propose a set of fixed rules. He is aware that multiliteracies require systemic updating, so the last chapter of his book highlights how different conceptualizations of change yield varied appropriations of his framework. Selber (2004) argues against a view that positions technological shifts as the primary source of social transformation. “The impulse to envision computers as autonomous agents of change is understandable on some level, in part because educators tend to be hopeful and overly optimistic professionals” (p. 189). Instead, he relies on systems theories to argue that transitions unfold over time because of several loosely coordinated elements. Thus, educators should consider the intersection of technological, pedagogical, and institutional shifts when proposing curricular updates.
Selber (2004) delineates a literacy theory and not a framework for exploring change in the digital age. That said, his ideas align with some theories of technology and society. His complex view of tools and their uses, as well as his attention to media affordances, overlaps with socio-technical approaches articulated by critical and cultural studies scholars. For instance, Barney (2004) argues that to investigate technologies’ role in a society, one needs to consider how individuals use them; the social, political, and economic context in which they are situated; and their intrinsic and particular design characteristics. This perspective avoids technological and social deterministic views. These two positions frame change in terms of cause and effect (Slack & Wise, 2005). The first one positions technology as the driving force of transformations in society; the second understands that humans collectively determine the course of change. Unlike these approaches, Selber’s (2004) ideas imply that humans and technologies are co-constituted.
In synthesis, this multiliteracies framework can not only orient curriculum development but also guide analysis of how digital technologies shape societies and vice versa. The next section describes a set of skills and knowledge that learners need to navigate the problem of information disorder in the digital age.
Multiliterate Individuals in an Age of Fast-Paced Change
Taken together, the multiliteracies approach and the elements discussed above provide a foundation for articulating competencies to grasp and deal with information disorder. I argue that to adapt and thrive in a scenario of fast-paced change, students need to know how to identify misleading pieces of information while also understanding the context that favors their spread. Relying on Selber’s (2004) ideas, I discuss how students can effectively use digital tools to become well-informed, thoughtfully question technology, and ethically engage in civic dialogues (see Table 1). This approach gives them a framework to understand which elements favor information disorder and allows them to adapt their analytical abilities over time.
Table 1. Competencies for Multiliterate Individuals.
Functional literacy involves not only the technical aspects of technologies but also the social conventions and specialized discourses surrounding their uses (Selber, 2004). Thus, students need to understand how social relations can impact the circulation of false stories. Research reveals that individuals might share information to signal their values, identity, and group affiliation (Lazer et al., 2017; Marwick, 2018). Reflecting on how social dynamics affect news consumption habits can heighten learners’ awareness of such malpractices. Students also need to understand the different genres of misleading stories, since not all malicious content has the same structure or serves the same purposes. For instance, satirical accounts are meant to entertain, while conspiracy theories appeal to people’s fears by rejecting experts’ perspectives. Hobbs (2017) argues that learners can benefit from having more precise terms to discuss disinformation, such as propaganda, clickbait, and hoaxes. This nuanced view can allow them to differentiate bias from fabrication and evaluate the news that they consume.
The process of evaluating media messages also involves looking beyond the content at hand. Wineburg and McGrew (2017) explain that the fact-checkers in their experiment relied on other sources to assess the trustworthiness of information. This approach, called lateral reading, requires the right techniques to produce satisfactory results. In the same study, the PhD historians tried to find the original source for the messages that they were reading and, overall, did not reach any conclusive verdict. For this reason, it is fundamental to formulate helpful questions to guide lateral reading. Bayer (2016) challenges individuals to ask: What is the source of this story, and what do I know about it? If this story were true, what else would be true? For instance, a fabricated story circulated in 2016 claiming that Pope Francis had endorsed Donald Trump for president. Learners could quickly check the credibility of the website by asking themselves what else would be true if the Catholic leader had supported Trump. A political event of this proportion would have been covered by media outlets all over the world, so a quick Internet search would reveal that the story was fabricated. In synthesis, empowering students to sort fact from fiction can make them more effective users of digital networks.
In addition to mastering these functional literacy competencies, individuals need to be informed questioners of technology. Selber (2004) stresses that the critical dimension of his framework examines how computing affordances and contextual factors intersect. As discussed in the previous section, the current information disorder problem has multiple facets. Misleading stories spread in a cultural environment in which individuals share and remix media and consume content from non-reputable sources. These interactions take place on platforms that profit from people’s engagement regardless of the nature of the content. Digital technologies and networks broaden the reach of fabricated stories, and algorithms influence the visibility of online information. This scenario offers favorable conditions for politically motivated groups to spread misleading content and generate chaos, confusion, and apathy. Understanding these different layers can benefit students. For instance, knowing the difference between the business models of social networking sites and traditional media outlets can make them more selective news consumers. Being aware of echo-chambers and filter bubbles can encourage them to actively expand their horizons.
It is common to fall into deterministic traps when analyzing the intersections between technology and social life. According to Slack and Wise (2005), individuals are culturally conditioned to frame problems as cause–effect relations. For this reason, instructors should also promote reflections about the role of digital tools in societies. In particular, it is fundamental to highlight traditional social dimensions—economic, cultural, political—and also to consider the affordances of technologies. Without this holistic perspective, on the one hand, there is the possibility of focusing just on people's actions while downplaying the role of tools. On the other, there is the risk of assuming that the advent of digital networks created the so-called fake news phenomenon. Both perspectives offer shallow approaches to the topic. Socio-technical theories of technology and society can help avoid such oversimplifications and guide in-class discussions.
Multiliteracies prompt learners to apply their functional and critical competencies toward becoming reflective producers of media. This step is particularly crucial because it empowers students to join civic dialogues actively. Selber's (2004) original framework highlights the need for mastering textual and audiovisual techniques of online production, which resonates with the work of other educators (Friesem, 2019; Hobbs, 2017). He also challenges students to see the relationship between their texts and the hypertextual nature of the Internet. Here, I argue that students should take into account the role of bots and algorithms in shaping online communication. In particular, I corroborate Klawitter and Hargittai's (2018) assertion that learners need to develop algorithmic skills or "users' knowledge about algorithms and their role in making online content visible, as well as users' ability to figure out how particular algorithms work, and then leverage that knowledge when producing and sharing content" (p. 3492). Granted, fully understanding how bots and algorithms work would require sophisticated computer science skills and access to proprietary computing code; such depth, however, is not essential. For the average consumer of online news content, the simple awareness that these mechanisms exist and that they influence the visibility of information can promote a more mindful engagement with media messages. This skillset is particularly crucial given that most publishing platforms and search engines use techniques to filter content.
As mentioned before, the boundaries between media producers and audiences are blurred in the digital age. In participatory online spaces, individuals can recirculate, remix, appraise, critique, and evaluate content (Jenkins et al., 2013). Individuals might circulate false stories to signal their values, identities, and belonging to certain groups (Lazer et al., 2017; Marwick, 2018). Therefore, the simple act of sharing and commenting can be considered a form of co-creation, even when users are not producing content per se: While most Internet users do not post YouTube videos or political blog posts (although many do), a huge number take part in lower-overhead online activities, such as liking a Facebook post, reblogging, retweeting, or commenting on a news story, or wading into a discussion war on someone else's Facebook or Instagram account. (Marwick, 2018, p. 503)
As a consequence, it is fundamental to discuss with students the ethical implications of being part of this type of environment. Mihailidis and Viotty (2017) highlight the importance of being responsible consumers of media. Similarly, Jenkins et al. (2009) point out that a key goal of media literacy education is to "encourage young people to become more reflective about the ethical choices they make as participants and communicators and the impact they have on others" (p. 19). To reaffirm what I discussed earlier, individuals might spread certain types of content to signal their affiliation to like-minded others rather than to engage in civic debates (Lazer et al., 2017; Marwick, 2018). In such interactions, learners might have more success asking questions that prompt reflection rather than trying to persuade others with a list of facts. They can pose the same questions used to evaluate the trustworthiness of information, such as "What is the source of this story? If this story were true, what else would be true?" (Bayer, 2016). Using this strategy, learners can also choose to directly debate ideas only with individuals who are open to these types of dialogues.
The landscape of information disorder will shift as technology evolves, communities create new cultures within digital spaces, platforms change their business models, groups find new ways to bypass civic debates, and so on. Thus, instructors also need to instill in students a disposition for being lifelong learners. Fact-checking lists become quickly outdated and do not allow individuals to adapt. Instead, the competencies delineated in this section give learners a foundation to build upon because they teach the underlying logic of information disorder in the Internet age. Granted, instructors cannot ensure that students will adapt to transformations in society; however, they can share knowledge and teach skills that allow individuals to keep learning. In doing so, they empower them to deal with disinformation, misinformation, and mal-information.
Conclusion
This article combined a multiliteracies framework (Selber, 2004) with an analysis of the factors that contribute to information disorder to propose a set of competencies for students to thrive in the digital age. I argue that to adapt to the fast-paced changes surrounding the so-called fake news phenomenon, students need to know how to identify misleading stories and also understand the factors that contribute to their circulation. This approach provides them with an understanding of how the landscape of information disorder evolves, and it resonates with scholarship on media and information literacies. In particular, it emphasizes responsible engagement instead of just the evaluation of news sources (Bulger & Davison, 2018; Mihailidis & Viotty, 2017). It also encourages an interdisciplinary approach to the topic (Bulger & Davison, 2018).
It is essential to highlight that multiliteracies alone cannot fully solve the issue of information disorder. While my approach does not directly address the challenge of distrust in traditional media outlets, journalistic malpractices, or inflammatory political discourse, improving students' evaluative competencies can enhance their ability to choose the news that they consume. The complexity of the fake news phenomenon calls for multiple interventions; nevertheless, having individuals able to assess the quality of online information can mitigate its consequences. In particular, engaging in dialogue is one of the pillars of civic engagement, and it requires well-informed citizens (Gordon et al., 2013). Youth tend to consume their news through social media platforms, an environment where fabricated stories circulate abundantly. As discussed above, their time in front of screens does not necessarily improve their ability to become well-informed. Thus, the proposed framework empowers them to use the Internet and social media in adaptive and productive ways.
This multiliteracies framework for combating information disorder has pedagogical and theoretical implications. The suggestions proposed here can be used in individual courses or across the curriculum. The framework can also foster collaborations across departments and units within universities, such as libraries and digital production centers. For instance, a course on algorithms can bring together elements of information, social, and computer sciences. This approach also broadens the concept of information disorder. Instead of framing the problem just as a message transmission issue, it contextualizes the communication of misleading stories within a set of techno-cultural, economic, and political relations that shift over time. This theoretical move resonates with the work of other scholars (Marwick, 2018; Wardle & Derakhshan, 2017), and it expands the object of study for researchers interested in the topic.
Following this broader view of information disorder, future research can further explore how algorithms and bots shape online discourses and the implications for individuals as well as institutions (social media sites, news outlets, etc.). Algorithms are among the most opaque elements of networked public spheres, and there is a need for more research to understand their impacts. As social media sites start to adopt strategies to combat information disorder, it is fundamental to know how their actions affect users as well as civic debates and dialogue. In particular, it might be essential to monitor how malicious groups adapt their strategies over time. Also, studies can explore the outcomes of this multiliteracies framework and other pedagogical approaches. In sum, this work underscores the need to teach students competencies that will allow them to adapt their evaluative abilities in the face of changes in the landscape of information disorder. Students need to effectively identify misleading stories and thoughtfully question the role of technology in society so that they can ethically participate in civic debate and dialogue.
Declaration of Conflicting Interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author(s) received no financial support for the research, authorship, and/or publication of this article.
