Abstract
How should scholars make sense of the rapid growth of generative artificial intelligence in media work? In this commentary, we argue that researchers can begin by stepping outside of their intellectual silos to see how the challenges and opportunities posed by generative AI are commonly shared across the media industries. We focus on three primary mass communication domains—advertising, journalism, and public relations—to illustrate how media professionals across these fields are adopting similar AI technologies (e.g., machine learning, natural language processing, and recommender systems) for often similar purposes (e.g., content creation, audience engagement, and business operations). Moreover, the uptake of AI has profound consequences—for ethical norms as well as the roles and relationships of humans and machines—that may be best understood across media industries rather than within each in isolation. Ultimately, a more cross-industry approach to scholarship could develop a more encompassing picture of AI's impact on media work and media consumption.
The introduction of generative artificial intelligence has led to a watershed moment across a range of professional fields, including and perhaps especially in communication and media industries. It is true that experimentation with artificial intelligence has been happening in public relations for more than a decade (Swiatek & Galloway, 2023), and the presence and use of AI has been growing steadily within advertising (Liu & Yao, 2023) and journalism (Beckett, 2019) during the same time, but the integration of AI into these industries has risen sharply since the launch of OpenAI's ChatGPT in November 2022. In contrast to narrow AI applications designed to carry out a specific task, generative AI tools mark a step-change that seems particularly disruptive to media enterprises because they can create content—text, images, audio, video, code, and more—that seems uncannily humanlike. Whether it's conversational chatbots like ChatGPT or Claude that rely on large language models to answer questions and generate prose, or text-to-image generators such as Midjourney or Stable Diffusion that allow users to generate high-quality visuals from simple prompts, these and other generative AI tools signal a significant advance in the ability of machines to enter into roles previously associated with humans in communication and media work, even when compared to the early days of the internet and its impact on the creative industries (Deuze, 2007). Industry reports and academic research now document widespread and growing use of generative artificial intelligence and other AI applications across the media landscape (e.g., Diakopoulos et al., 2024; Huh et al., 2023; McCorkindale, 2024).
Scholarly study of the use of AI and its impact for individuals (including professionals and consumers), organizations, and society has developed along with the incorporation of AI into media work; notably, however, research regarding AI has progressed further in some fields, such as journalism studies, than in others (e.g., see de-Lima-Santos & Ceron, 2022; Deuze & Beckett, 2022; Jamil, 2021; Lewis & Simon, 2023; Pavlik, 2023; Simon, 2024; Zerfass et al., 2020). Given the rapid growth and widespread adoption of generative artificial intelligence in only a few short years (Fui-Hoon Nah et al., 2023; Mollick, 2024), practitioners and scholars of journalism, public relations, advertising, entertainment, and related fields have stressed the need to bolster an understanding of how generative AI along with previously existing forms of AI are transforming media industries and the far-reaching implications of these changes.
As scholars who have focused our own research on the integration of digital technologies and artificial intelligence within journalism (e.g., Lewis et al., 2019), we echo that the study of AI in all its forms, its uses, and its implications within specific industries is indeed important. However, as we make the case in this commentary, further research is also needed to investigate the implications of AI across media industries. Media practitioners in different industries often employ the same technologies—including machine learning, machine vision, translation, transcription, natural language generation and natural language processing, social listening, and recommender systems—for similar purposes, such as gathering and processing information, content creation, product dissemination, audience engagement, and supporting business functions (see Chan-Olmsted, 2019). More critically, by their nature and in their use, these AI applications are “disruptive” (C. A. Lin, 2019) and “transformational” (Chan-Olmsted, 2019), upending fundamental elements of media production, distribution, and consumption, from how work is performed to the relationships between and among professionals and their audiences to the ethics guiding practitioners to the very identity of each profession and its workers. Such disruption is not happening in a single media field in isolation; rather, it is occurring within and across the media and creative industries as a whole (Amankwah-Amoah et al., 2024).
The AI technologies within media fall under the umbrella of what we have termed “communicative AI” and are part of the collective remaking of the roles and relationships of people and technology in a new paradigm of human-machine communication (Guzman & Lewis, 2020). There is, in fact, a much larger societal shift unfolding around the world in the way humans communicate and interact with and through technology. Even if the media industries themselves are but a small part of this broader transformation, they are nevertheless a vital part, particularly given the role that media—from news to advertising to entertainment—play in shaping how people come to understand the world and their place in it. These industries, their workers, the messages they produce, and the audiences that consume those messages have never existed in isolation (Lewis & Westlund, 2015). Rather, each relies on the other, and it is through comparison and contrast—between, say, what counts as news and what is deemed opinion or entertainment—that a sense of identity, norms, and boundaries are developed (Schudson, 2001). It is at this messy confluence in the way media messages are made, circulated, and received, for example, that professional authority is established as a distinctly relational achievement for media workers like journalists (Carlson, 2017). In the past three decades, however, a series of technological innovations—from the Web to mobile phones to social media—have blurred these boundaries as never before (Boczkowski, 2021), and now AI seems to be remaking them yet again. Thus, there is a need to more fully understand a media ecosystem comprising multiple industries that are increasingly adopting artificial intelligence and simultaneously being transformed by it.
Below we highlight some of the shared challenges and implications of AI across industries, focusing on journalism, advertising, and public relations, and we offer the example of deepfakes to demonstrate the need for more robust inter-industry dialog and research.
While crucial differences exist among these three media industries, what they share is an orientation toward creating content for and building relationships with key stakeholders such as audiences, customers, and clients. The allure of AI lies in its ability to help practitioners carry out the work of journalism, advertising, or public relations more efficiently. With any new technology there are questions about the tradeoffs between efficiency and quality, and media professionals are currently weighing those tradeoffs for AI (Amankwah-Amoah et al., 2024; Chan-Olmsted, 2019; Diakopoulos et al., 2024). Where communicative AI differs fundamentally from predecessor technologies is in its ability to mediate and create messages, stepping into creator roles in which the machine has greater agency within the communication process (Gunkel, 2012; Guzman & Lewis, 2020; Hepp et al., 2023). The efficiency question centers on what AI can perform on behalf of human media workers as well as what media workers can accomplish in conjunction with AI (Chan-Olmsted, 2019; Diakopoulos et al., 2024).
The challenge that arises is how best to realign the roles and relationships between people and technology in media work given the new capabilities of AI (Chan-Olmsted, 2019; Lewis et al., 2019; Lewis & Simon, 2023; Rodgers, 2021). What functions should be handed over partially or completely to AI? What level of human supervision should exist to manage the use of AI? The audience-facing uses of AI are of particular concern because at stake is not only the quality of the media content but also the brand and reputation of the organization or its clients (Galloway & Swiatek, 2018). For example, news organizations have experimented with chatbots as a novel way to deliver content and to strengthen the audience's connection to the news provider (e.g., Ford & Hutchinson, 2019). Public relations and advertising practitioners similarly use chatbots or AI influencers as points of contact with the audience (Galloway & Swiatek, 2018; Huh et al., 2023). Across all three industries, artificial intelligence serves as a type of brand representative, mediating the relationship between the organization and its audience (Ford & Hutchinson, 2019; Liu & Yao, 2023; Oh & Ki, 2024). Beyond brand reputation, however, there are larger questions about the implications of disrupting relationships that once existed almost exclusively among humans, such as practitioner–practitioner, practitioner–client, and practitioner–audience relationships. These relationships are fundamental not only to the operations of media organizations but also to the identity of media professionals (Lewis et al., 2019; Lewis & Simon, 2023; Oh & Ki, 2024).
Artificial intelligence also poses a challenge to communication and media ethics (Gunkel, 2012) that is felt within and across industries. Ethical codes of journalism and other media professions have been established on the ontological assumption that it is humans who carry out media work, but the capabilities of AI technologies undermine this assumption and, thus, the ethical norms built upon it (Guzman, 2021). Media professionals and scholars have identified ethics as a critical consideration in the adoption and use of artificial intelligence within advertising (Chuan et al., 2023; Huh et al., 2023), journalism (Dörr & Hollnbuchner, 2017; Lin & Lewis, 2022; Montal & Reich, 2017), and public relations (Swiatek & Galloway, 2023; Valin & Gregory, 2020). Although they manifest in different ways, many of the same ethical issues are faced across industries, including the accuracy and veracity of AI-generated content (Amankwah-Amoah et al., 2024; Simon & Isaza-Ibarra, 2023; Valin & Gregory, 2020), the need for transparency around the disclosure of AI use (Montal & Reich, 2017; Rodgers, 2021; Valin & Gregory, 2020), the danger of bias and discrimination in content and distribution (Diakopoulos et al., 2024; Rodgers, 2021; Swiatek & Galloway, 2023), and privacy and surveillance of audiences (Chuan et al., 2023; Dörr & Hollnbuchner, 2017; Valin & Gregory, 2020). Within the literature, these ethical concerns are most often discussed in terms of content being delivered to the consumer. What is less acknowledged but critically important is the ethics of how professionals from different industries should engage with one another around content created and disseminated by artificial intelligence.
The rise in disinformation and deepfakes generated and distributed using AI underscores this point. The challenge is not limited to a single industry (Karinshak & Jin, 2023), and scholars and practitioners across journalism, advertising, and public relations have identified deepfakes and disinformation as threats to their professions, to the audience and consumers, and to society. The challenge can be viewed from two angles. First, there is the creation of deepfakes and disinformation by workers in one industry to manipulate workers in another, as in the creation of deepfakes by a publicist to garner news attention. However, the more pronounced concern lies with parties outside legitimate media organizations, such as individuals or groups that spread political propaganda that makes its way into mainstream news (Woolley, 2023), attack a company or an individual to harm their reputation (Karinshak & Jin, 2023), or create false endorsements in advertising for financial gain (Kietzmann et al., 2021). A single deepfake or message of disinformation could affect professionals across industries. For example, if a news organization were to publish false information about a person or company based on disinformation circulating via social media, the event might create a crisis that threatens the reputation of the company or person, which would need to be redressed by public relations professionals. At the same time, the crisis might undermine the credibility of the news organization, thus requiring journalists to engage in damage control and potentially leading the public to call for advertisers to boycott the news outlet in response. The fast-growing scale and scope of false information affects more than the media workers who have to wade through it and rectify its consequences; the legitimacy of entire industries can be undermined as the veracity of media content becomes murkier and truth and fiction become harder to discern (Woolley, 2023).
The implications of disinformation and deepfakes also reach beyond media professions to their audiences and society. Noting these and other concerns regarding the integration of AI into media, scholars and professionals in advertising, journalism, and public relations have separately articulated the need for the expanded education of audiences in the form of general media or AI literacy (e.g., Chuan et al., 2023; Deuze & Beckett, 2022; Huh et al., 2023; Karinshak & Jin, 2023; Rodgers, 2021). Indeed, the consequences of automation for communication (Hepp et al., 2023) and communicative labor (Reeves, 2016), both for individuals and society, offer a compelling reason to examine AI across industries. Each industry and the messages it produces are part of a much larger media ecosphere, or what Jungherr and Schroeder (2023) call the “public arena.” Audience members do not encounter news stories, advertisements, or branded content in isolation; the products of media industries are part of a jumble served alongside other messages produced by still more parties on people's social media feeds or other points of media consumption. Altogether, this constitutes a cacophonous world of information abundance (Boczkowski, 2021)—one now made more complicated by the infusion of AI-generated content that may begin to pervade the public arena (Jungherr & Schroeder, 2023). The examination of individual media industries and their content enables the understanding of individual instances of the implications of AI for audiences, but such an approach cannot get at the totality of what is transpiring. One of the problems of contemporary media research in an age of AI is that it proceeds from assumptions about older forms of media that were once more distinct and contained, and it has not adapted to new message flows and new players pervading and reshaping the media ecosystem, including autonomous technologies (Woolley, 2023).
As such, our concluding call is for scholars of advertising, journalism, and public relations—and the field of communication writ large—to break out of the intellectual silos that limit our ability to see the big picture when it comes to AI, both generally and specifically in this emergent moment of generative AI. Researchers in these fields (ourselves included, we acknowledge) are too accustomed to formulating concepts and methods as if social life, technology innovations, and everyday interactions stayed neatly within containers such as “journalism” or “strategic communication.” Those clean divisions, reified in our subgroupings within academic associations and journals, belie the true messiness that exists in the way audiences encounter and engage with media and information (Boczkowski, 2021). They may become doubly problematic at a time when AI is also upending familiar ideas about humans and machines and communication in the public arena. While scholars may prefer to “stay in their lane” and study media professions in isolation, we need better ways forward that grasp the complexity across media. The practical, strategic, and ethical challenges posed by AI, we have argued here, are broadly shared across industries, their workers, and their audiences. Thus, embracing a big-picture approach—one that seeks to identify common cases, problems, and solutions—could yield a more encompassing and accurate portrayal of media work and media consumption in the era of AI. Even more, such an approach to scholarship could make a valuable contribution to pedagogy and training, helping educators work more collaboratively across the usual divisions between journalism and strategic communication to confront the common opportunities and challenges of AI for the future of the media industries.
Declaration of conflicting interests
The authors declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The authors received no financial support for the research, authorship, and/or publication of this article.
