Abstract
This invited essay discusses a recent special issue published by the journal Human Communication Research entitled “Rethinking Communication in the Era of Artificial Intelligence.” Articles in the special issue explore how: (a) artificial intelligence (AI) may be altering basic communication processes, (b) communication theories can help us understand and explain AI's potential impacts, and (c) research can explore how AI can help promote the public interest. These issues are explored in varied contexts (e.g., close relationships, workplaces, media, and politics) using quantitative, computational, and qualitative methods. My essay highlights the special issue's purpose, describes the peer review process that led to six articles plus the opening essay by guest editors S. Shyam Sundar and Eun-Ju Lee, features an exemplar article that explores how social bots helped amplify partisan news coverage in the United States during the COVID-19 pandemic, and calls for communication scholars to publish additional research on these topics in outlets such as Emerging Media.
It is an honor to contribute to this inaugural issue of Emerging Media: Technology, Industry and Society. I currently serve, along with my colleague Professor Yariv Tsfati (University of Haifa, Israel), as co-editor-in-chief of Human Communication Research (HCR), an official journal of the International Communication Association. HCR's primary mission is to publish high-quality social-science research that advances communication theory (see https://academic.oup.com/hcr). Among our responsibilities as editors, one of the most rewarding is the opportunity to invite leading scholars to edit special issues once per year. Recently, we were fortunate to have Professors S. Shyam Sundar (Penn State University, USA) and Eun-Ju Lee (Seoul National University, South Korea) guest edit an HCR special issue entitled “Rethinking Communication in the Era of Artificial Intelligence.” Given that the special issue on artificial intelligence (AI) dovetails nicely with the mission of Emerging Media, my invited essay briefly summarizes the issue's content and contribution. I want to stress that all credit for the special issue goes to Professors Sundar and Lee as well as the contributors, as they are the ones who produced this important body of intellectual work.
HCR's special issue on AI
The special issue on AI, which appeared in the July 2022 issue of HCR (Vol. 48, issue 3), contains seven empirical studies and conceptual essays. The special issue process started in October 2020, when Professors Lee and Sundar issued a call for extended abstracts of no more than 1,000 words. They asked scholars to address issues such as how: (a) AI might alter basic communication processes; (b) communication theories can help understand, predict, and explain the potential impact of AI; and (c) research can inform the development of AI applications and services that promote the public interest. More than 50 extended abstracts were submitted by the January 2021 deadline. Professors Sundar and Lee selected about a dozen authors to submit complete manuscripts, which were then peer reviewed. The editors ultimately accepted six articles and authored an opening essay for the special issue (Sundar and Lee, 2022).
The special issue features work on AI in a variety of contexts. Topics involve persuasion (Dehnert and Mongeau, 2022), personal relationships (Brandtzaeg et al., 2022), and organizational communication (Endacott and Leonardi, 2022; Laapotti and Raappana, 2022) as well as partisan news coverage about health topics (Duan et al., 2022). A variety of methods were employed in these studies such as experiments (Banas et al., 2022), computational methods (Duan et al., 2022), and ethnographic interviewing and observation (Endacott and Leonardi, 2022). The special issue has received widespread attention; for example, as of March 15, 2023, the article by Brandtzaeg et al. (2022) entitled “My AI friend: How users of a social chatbot understand their human-AI friendship” had received more than 6600 views. Regarding the implications of their special issue for communication theory, Sundar and Lee (2022) conclude: “With rapid and seemingly fundamental changes in how communication is performed, it is imperative for communication scholars to critically evaluate the relevance and utility of existing theories and research findings and propose new ones, as needed” (p. 383).
Algorithmic agents in the hybrid media system: An exemplar study
To offer an exemplar, I briefly summarize one of the special issue articles by Duan et al. (2022) entitled “Algorithmic Agents in the Hybrid Media System: Social Bots, Selective Amplification and Partisan News about COVID-19.” The authors investigate the impact of social bots on attentional dynamics in the hybrid United States media system. In the article, they define social bots as “algorithmic agents that amplify certain viewpoints and interact with selected actors on social media” (Duan et al., 2022: 516). Regarding hybridity, they argue that rather than trying to distinguish older and newer media, we should examine the interdependence between them. Their research questions include: (a) what percentage of tweets about COVID-19 in the first few months of the pandemic were generated by bot-like accounts; (b) how the topics of tweets from bots compared with those of tweets from humans, as well as with the topics covered in partisan and mainstream US news media coverage of the pandemic; and (c) how the topics in tweets from bots, humans, and partisan/mainstream news stories were reciprocally related over time.
To address those questions, the authors analyze two kinds of data. They scraped more than 1,600,000 tweets about COVID-19 from March 1 to May 31, 2020. They also sampled more than 50,000 news stories from six United States media outlets which, according to their categorizations, are conservative outlets (Fox News and Breitbart), center-left or “mainstream” outlets (New York Times and Washington Post), and liberal outlets (MSNBC and HuffPost). To analyze these data, they used Botometer 4, a machine learning system for bot detection that examines more than 1,000 features of a Twitter account to estimate the likelihood that it is a bot; this method identified what percentage of the tweets were coming from bots. They then used structural topic modeling to identify frequently occurring topics in both tweets and news stories about COVID-19. Finally, time series analysis was performed to see if an increase in the number of tweets and stories on a specific topic from one source (e.g., social bots) on a particular day predicted an increase in the number of tweets or stories about those same topics from another source (e.g., humans or the news media) on the following day or days.
Key findings from Duan et al. (2022) include that 9% of tweets about COVID-19 came from bot-like accounts. The authors grouped the topics of tweets and news stories into three broad categories: daily life, political/societal topics, and public health. They found that Twitter bots emphasized political/societal topics more than human Twitter accounts did; for example, Twitter bots were more likely than humans to blame China for the pandemic or to highlight former President Trump's failures in responding to it. As these examples suggest, Twitter bots typically were tweeting about divisive issues.
Regarding bidirectional relationships, Duan et al. (2022) found that the more that people tweeted about daily life topics (e.g., panic shopping) as well as political/societal topics on one day, the more that Twitter bots would do so on the next day. In other words, Twitter bots were amplifying tweets by people but not the other way around. The story was different, however, for partisan news sources. When Twitter bots tweeted about a topic that was damaging to the Trump administration on one day, partisan liberal media outlets carried more stories on that same topic over the following days. Likewise, when Twitter bots tweeted on a topic that fit a conservative agenda (e.g., complaints about masking) on one day, conservative partisan media carried more news stories on that same topic in the following days. In contrast, Twitter bots did not predict the story topics that mainstream media sources discussed. In sum, Twitter bots played a unique role in amplifying partisan media coverage.
In terms of implications for communication theories, Duan et al. (2022) argue that conceptualizations of hybrid media systems need to take account of nonhuman as well as human actors. Rather than treating social bots as “noise,” scholars should explore their role in hybrid media systems, including in countries beyond the United States, which was the primary focus of Duan et al.'s article. There are also larger implications for theories ranging from agenda setting to the classic two-step flow hypothesis, as well as for newer work on online influencers and networked influence. I also see connections between Duan et al.'s findings and my own research about difficult family conversations (see Wilson and Caughlin, 2017).
Bots, hybrid media systems, and family conversations about COVID-19
My colleagues and I currently are exploring how, during the summer of 2021 after COVID vaccines had rolled out in the United States, vaccinated Americans were talking with hesitant members of their own family about why they should also get vaccinated for COVID-19 (Wilson et al., 2023). These conversations can be challenging, especially when family members hold different political beliefs and may feel that their core social identity is being attacked (Soliz and Colaner, 2017). Research already has demonstrated how media coverage of the COVID-19 pandemic has spurred family conversations. Wagner and Reifegerste (2021) found that German adults reported talking frequently about COVID-19 media coverage early during the pandemic with both strong (e.g., family) and weak (e.g., neighbors) ties. Those authors found that families talked about media coverage to: (a) acquire and share information, (b) validate information, and (c) cope with negative emotions. In other words, families were discussing what they were learning about COVID via the media, as well as consulting media to verify what family had told them about the pandemic. One thing I began thinking about after reading Duan et al.'s (2022) article was how social bots might be affecting this process. We know that family members who lean conservative versus liberal have drawn discrepant conclusions about COVID based on their exposure to different media systems (Pasquini and Saks, 2022). Thus, we need to explore not only how bots can amplify the partisan news divide, but also how this might amplify partisan divides playing out in family conversations about topics like COVID-19.
Conclusion
Like Sundar and Lee (2022), I believe the articles in HCR's special issue on AI, “and the many more that they inspire, will reshape our scholarship to better reflect an increasingly AI-driven world” (p. 384). Given Emerging Media's mission of publishing “interdisciplinary research articles and leading trends discussions” on topics like AI, it is my hope that the special issue will help stimulate future submissions to this important new journal.
Declaration of conflicting interests
The author declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author received no financial support for the research, authorship, and/or publication of this article.
