Abstract
The arrival of ChatGPT and other artificial intelligence (AI) writers has captured the popular imagination, but it has also raised grave concerns. What are the implications of the widespread deployment of such content-generation technologies? How should we, as communication scholars, think about and study AI writing tools? We discuss these questions by reflecting on research highlighting the psychological effects of AI as a source of communication. We identify key future research directions, including redefining concepts like creativity, addressing major weaknesses of AI writers, and motivating the design of better AI tools, with an eye toward reclaiming human agency in the post-ChatGPT era.
The media hype and public fascination with the launch of ChatGPT by OpenAI in December 2022 signal an existential crisis that strikes at the heart of journalism and communication: If a machine can write coherently in response to a question, where does that leave legions of humans who earn a living by writing and communicating? If communication is a defining attribute of humanness, has ChatGPT crossed the line, making machines indistinguishable from humans?
Similar questions were already asked a decade earlier, when smart speakers like Amazon Alexa and virtual mobile assistants like Apple Siri responded conversationally to people’s queries. In fact, such questions were asked as far back as the 1960s, with the launch of ELIZA, a simple computer program that mimicked interactions with a psychotherapist; many people were captivated by it to the point of disclosing intimate details about themselves and holding long conversations with it. ELIZA’s creator became concerned that users were making unwarranted attributions of human-like abilities, such as understanding and empathy, to computers (Weizenbaum, 1976). This human tendency became known as the “Eliza Effect,” referring to “the susceptibility of humans to read far more understanding than is warranted into strings of symbols—especially words—strung together by computers” (Hofstadter, 1995, p. 158). In a way, this susceptibility reduces the burden on computer programs that aspire to pass the Turing test, because it does not take much for humans to treat machines as if they were sentient beings.
Nowhere is this more evident than in the media realm, as demonstrated powerfully by the “Computers are Social Actors” (CASA) research program in the communication field. A series of experiments in the 1990s showed that people apply the same social rules of human–human interaction to human–computer interaction (HCI) (Reeves & Nass, 1996). The more information a computer discloses about itself, the more people reciprocate by revealing intimate details about themselves (Moon, 2000). When they evaluate a computer’s performance on the same computer, they tend to be more polite than when they evaluate its performance on a different computer (Nass et al., 1999). They even apply gender stereotypes by rating computers with men’s voices as more knowledgeable about the topic of computers and rating computers with women’s voices as more knowledgeable about love and relationships (Nass et al., 1997).
Several streams of research converge around the idea that human psychology is at the heart of our collective response to advancements in AI technology. The fact that we humans tend to mindlessly invoke the social script of human–human communication when interacting with computers suggests that we perceive computers as more than simply a medium or channel of communication, but rather as a source of communication. Developments like generative AI and interfaces like chatbots serve to amplify this perception of “sourceness” because they imbue the technology with greater agency. What are the implications of an “intelligent” technology being perceived as a source of communication? How will it affect the processes and effects of journalism and mass communication? Below, we address these questions by invoking the concept of source orientation and describing how the murkiness of sourcing in online media is further complicated by ChatGPT. We then discuss issues involved in source identity and the disclosure of AI as the source of communication. We discuss how AI tools compel us to redefine sacred human concepts like creativity. We share an anecdote of our interactions with ChatGPT, which exposes the weaknesses of prose created by AI writers. Finally, we describe how ChatGPT complicates the affordance of source interactivity and poses a challenge to human agency, and what we can do to increase user agency and co-opt the technology into our communications.
Orienting to ChatGPT as a Source of Communication
A key concept, “source orientation,” which refers to the psychologically salient source of an interaction, emerged around the turn of the century. When users showed social responses to machines in CASA studies, it seemed ludicrous that they would be behaving socially toward computers. Study participants flatly denied that they were showing responses such as politeness and reciprocity to computers. It seemed logical, then, to assume that computer users were thinking of other humans when they showed social responses in front of the computer. They were probably orienting to the programmer, or to the person with whom they were communicating at the other end, as often happens with computer-mediated communication (CMC) tools such as e-mail and instant messaging. If this is true, that is, if users are indeed thinking of fellow humans when interacting with a computer, then their social responses are perfectly reasonable and do not mean that they are treating machines as social entities.
Sundar and Nass (2000) tested this possibility by proposing a critical test between two models of HCI: the Computer-as-Medium (CAM) model, wherein technology is treated as a mere channel for information transmission and users orient toward the computer programmer or a networker, not the computer itself; and the Computer-as-Source (CAS) model, wherein users respond directly to computers as the source of information, just as they would respond to other human beings as a source (Nass & Moon, 2000; Reeves & Nass, 1996). The latter response is borne out of a natural tendency to treat technologies that display social characteristics in a social manner (Nass et al., 1994), because we are not evolved enough to distinguish mediated life from real life (Reeves & Nass, 1996). Sundar and Nass (2000) found compelling support for the CAS model. In other words, users respond directly to the computer qua machine as a source. This finding illuminated the fundamental difference between HCI and CMC. It also inspired hundreds of research projects exploring the direct effects of media technology on psychological responses, fueling the growth of separate divisions for studying communication technology in the Association for Education in Journalism and Mass Communication (AEJMC) and other academic organizations. The International Communication Association’s Communication and Technology (CAT) division grew to become its largest division, surpassing the Mass Communication division in membership.
Users do not treat all technologies as distinct sources of communication. Technologies must be interactive, use natural language, and fill roles previously held by humans in order to elicit social responses from users (Sundar & Nass, 2000). These three elements have been present to varying degrees in communication technologies over the years, but they arguably reached a pinnacle with ChatGPT. Large Language Models (LLMs) have existed for a few years and attracted considerable research in the AI community, but only when they were presented to the public in the form of a chat interface did they burst into public consciousness. This is because all three predictors of source orientation are present to an unprecedented degree in this new technology. The chat functionality makes it interactive, bringing an almost human level of back-and-forth communication to interactions with a machine. As a significant improvement over previous natural language processing (NLP) models, ChatGPT was first pre-trained on a vast corpus of human-generated text and then extensively fine-tuned on specific tasks. ChatGPT excels at using natural language: trained to predict the next word, it generates highly human-like text and performs other human language tasks, such as holding a dialogue (Thorp, 2023). It thereby serves as a “communication source,” a role typically held by humans.
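To make the mechanism concrete, the following is a minimal sketch, in Python, of the next-word prediction that underlies LLMs. It uses the small, openly available GPT-2 model from the Hugging Face transformers library as a stand-in, since ChatGPT’s own model is proprietary; the prompt and the choice of model are purely illustrative.

```python
# Minimal sketch of next-word prediction, the core operation of LLMs.
# GPT-2 serves as an openly available stand-in for ChatGPT's model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The source of a news story shapes how readers"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, sequence_length, vocabulary_size)

# Convert the final position's logits into a probability distribution
# over the entire vocabulary: the model's guess at the next word.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, k=5)
for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(token_id))!r}: {prob.item():.3f}")
```

Generating a full response is simply this step repeated: sample a token, append it to the prompt, and predict again. Fluency, in other words, does not imply understanding.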
This last characteristic, serving as a communication source, is of particular importance to our discussion because the role of source has a sacred place in our field and has historically been held by humans. Source, the starting point of all traditional models of communication, has special significance in communication and psychology research because the identity of a source is integral to our consideration of communication content. News and information are taken seriously if they are attributed to respectable sources. Entire programs of research in social psychology, starting with Hovland’s seminal work on propaganda during World War II, have focused on the role of source credibility in influencing how we process information and are persuaded by it. In journalism, the source is hallowed, both in practice and in research.
In the realm of online news, however, source is a murky concept and can come layered in many forms, obscuring the real origin of a piece of information. In proposing the first typology of online sources, Sundar and Nass (2001) highlighted the distinction between sources that journalists talk about (i.e., individuals supplying the raw materials for a story) and sources that news audiences orient toward, namely, the gatekeepers who deliver the information to them in the digestible format of a newspaper, newscast, or online post. While the gatekeepers were traditionally human, that role has increasingly been assumed by news recommender systems, which help select and personalize information for individual users at a scale that would be impossible for newsroom editors. News consumers not only appreciate the individualization of information by these systems, but also perceive the content to be better when selected by a computer rather than a news editor, even when the content is identical. In one of the first experiments investigating how individuals evaluate human and machine gatekeepers differently, Sundar and Nass (2001) found that news stories were rated higher in quality when readers were led to believe that a computer selected them rather than a human news editor. The authors argued that this difference could be explained by the different “heuristics” triggered by these two sources.
Machine Heuristics Triggered by ChatGPT
Heuristics are mental shortcuts that help people make sense of incoming information. Eagly and Chaiken (1993) describe heuristic processing as a limited mode of information processing, leading to judgments and decisions based on relatively simple and superficial cues that are peripheral to the central content. By default, people follow the principle of least effort as they process information: they tend to engage in heuristic processing, converting cognitively demanding tasks into simpler mental activities and making decisions in a cost-effective manner (Fiske & Taylor, 1991). Cognitive heuristics are formed from prior experience and are thought to be stored in memory as simple rules of thumb (Chen & Chaiken, 1999). Under this framework, the source of communication is a superficial cue that triggers a cognitive heuristic, which then shapes the receiver’s perception of content, eschewing the need for effortful examination of content characteristics. And different sources trigger different heuristics.
Sundar (2008) proposed that when the computer is identified as the source of communication, it triggers the machine heuristic, a rule of thumb that machine-like characteristics are reflected in the content transmitted by the computer. The presence of any machine cues on the media interface will lead users to apply the mental shortcut that machines are more accurate and objective than humans, thereby shaping their perceptions and evaluations of the communications attributed to that source.
Several studies comparing the reception of human-written news with automated journalism suggest the operation of a machine heuristic in shaping readers’ perceptions. Readers believe that news written by an algorithm is less biased and more objective than news written by humans (Waddell, 2019), even when it contains arguments that run counter to their beliefs (Wischnewski & Kramer, 2022). In other words, simply telling users that the news is written by an AI reduces their tendency to engage in motivated reasoning. Haim and Graefe (2017) revealed that users expect human-written texts to be more readable and of higher quality than AI-generated news; but when presented with an AI-generated news article and a human-written news article side by side, they rated the AI-generated news higher in credibility and the human-written news higher in readability. Other studies, however, reveal that users tend to evaluate human-written news as more credible than AI-written news, even when the content of the news story is identical (Waddell, 2018, 2019; Zheng et al., 2018). Outside the news context, Jakesch et al. (2019) found that users tend to distrust Airbnb hosts whose profiles are written by AI.
These conflicting findings suggest that machine attributes are not all positive. In addition to the aforementioned positive machine heuristic, online users could also be applying the negative machine heuristic (Sundar, 2020). A negative machine heuristic refers to the general impression that machines are rigid and lack the flexibility to make nuanced subjective judgments like humans (Molina & Sundar, 2022). The distinction between the positive machine heuristic and the negative machine heuristic can help explain the contradictory evidence regarding why users trust machines more than humans in some instances, but trust them less in others. Depending on the context and the task at hand, the presence of AI as a source cue may be more likely to trigger either the positive or the negative machine heuristic, thus influencing users’ perceived credibility of the AI-generated content. Specifically, users will likely over-trust an AI when it performs mechanical tasks (e.g., fact-based queries with ChatGPT or news reporting) and under-trust it when it performs human-like tasks (e.g., writing movie reviews or human-interest stories) (Lee, 2018).
More generally, research in this domain illustrates how the disclosure of the source of a text (as either AI or human) affects users’ perceptions of the content’s credibility. Is this because AI-generated content, by nature, possesses certain attributes that human-generated texts lack? Or is it because of the sheer identity of the source (i.e., source cues on the interface)? A meta-analysis of 12 studies showed no significant difference in readers’ perceptions of credibility across human-written and machine-written text; only when users become aware of the source of the news do they tend to rate the credibility and quality of human-written stories higher (Graefe & Bohlken, 2020). In fact, blind tests generally show that individuals are remarkably poor at differentiating content generated by AI from that produced by humans. For instance, humans, even those with expertise, tend to be inaccurate in identifying machine authorship of texts, especially in creative writing contexts (Guo et al., 2023; Köbis & Mossink, 2021).
This is probably because AI writers like ChatGPT are trained on vast amounts of human-generated text and are fine-tuned to mimic their training data. This raises an important question: Does AI really “create” content in the same way humans do, or does it simply reproduce existing content?
The Question of Creativity and Originality
Researchers have started to question whether notions of creativity and authenticity apply to AI that generates content. By definition, ChatGPT is an application of Generative AI (GenAI), which is a class of machine learning algorithms that can learn from various types of content, including text, images, and audio, to produce novel content (Jebara, 2012). Notably, however, the learning process of GenAI inherently limits its ability to generate truly original concepts or ideas, as it primarily relies on synthesizing pre-existing content. This is a limitation that ChatGPT itself acknowledges (Pavlik, 2023). Therefore, the increasing reliance on AI for creative tasks calls for explicating the concept of creation/creativity. Is creativity really about juxtaposing existing words in new ways to achieve seemingly coherent sentences, or is creativity the expression of new ideas and thoughts that draw upon one’s experiences, personality, and soul?
Whether ChatGPT really exhibits creativity has drawn attention from the popular press. Musicians like Nick Cave say the lyrics generated by ChatGPT are not true creations but rather mere replications of pre-existing content that do not reflect human experience (Madarang, 2023). In a similar vein, in the human world, we value original works far more than replicas. A painting by Picasso will sell for millions of dollars, whereas an exact reproduction of that Picasso painting by another skilled artist will be valued much less. Even if the painter is not as famous as Picasso, the original work by an artist is valued more than replicas, however perfect. This is because we value originality as a unique creative expression borne out of the artist’s authentic experience (Newman & Bloom, 2012). ChatGPT is more like the skilled imitator than the original painter. Perhaps the best term to describe ChatGPT is “stochastic parrot”: it generates content based on statistical and probabilistic information, without actual reference to meaning (Bender et al., 2021). Conventional creativity originates from permeable cognitive structures, in terms of greater flexibility in either the intellectual sphere or the affective sphere (Runco & Jaeger, 2012). Neither such flexibility nor human emotion falls within the domain of expertise of GenAI, whereas both are very much part of human expression, suggesting an inevitable linkage between human experience and the notion of creativity. Therefore, we must ask: What is originality? Or, should we be asking: How is human originality different from machine originality?
For LLMs like ChatGPT, originality amounts to skillfully mimicking the human style of writing after learning from millions of pieces of human-generated text. However, there is no guarantee that the writing they generate means anything. Some computer scientists have dubbed ChatGPT a “bullshit generator,” as it produces seemingly convincing but false information (Narayanan & Kapoor, 2022). This fuels increasing concern in industry and academia that ChatGPT might accelerate the spread of misinformation.
The Problem of Bullshit Generation
Following the flood of mainstream media coverage devoted to ChatGPT and widespread concerns that it would be used on a large scale to write student essays and term papers, we conducted a little experiment by assigning ChatGPT an academic task: we asked it to describe the MAIN model by S. Shyam Sundar. Its first response explicated five supposed components of the model:
- Media characteristics. The physical and semantic properties of the media, such as format, content, and design, which shape how individuals engage with the media.
- Audience characteristics. The individual differences and personal characteristics of the audience, such as demographics, values, and motivations, which influence how they interpret and respond to the media.
- Social context. The social and cultural environment in which the media is consumed, including the norms and expectations of the community and the relationships between individuals.
- Mediated communication. The exchange of information and meaning between the media and the audience through verbal and nonverbal cues.
- Mediated experience. The emotional, cognitive, and physiological responses of the audience to the media, which can shape their perceptions, attitudes, and behaviors.

The MAIN model suggests that these factors interact and influence each other in complex ways, and that the overall effects of media-audience interactions depend on the specific combination of factors present in a given situation. The model also suggests that the way media and audience interact are dynamic, and they are continuously influencing each other.
When we pointed out that this was wrong and hinted that the model is about technological affordances and cognitive heuristics, it apologized and offered a revised account:

The MAIN model suggests that the design of digital media platforms afford certain information processing strategies, such as browsing, searching, and filtering, and that users employ cognitive heuristics, such as the availability heuristic, the representativeness heuristic, and the affect heuristic, to navigate and make sense of the information they encounter. In summary, the MAIN model explains how the design of digital media platforms shapes the way people process and make sense of the information they encounter online.
A couple of days later, we asked ChatGPT the same question:
The model posits that the design of digital media platforms, specifically the technological affordances of the platform, shapes the cognitive heuristics that users employ when processing information. Technological affordances are the capabilities and possibilities that a platform provides for users to interact with information, such as browsing, searching, and filtering. Cognitive heuristics are mental shortcuts that people use to simplify decision-making and information processing, such as the availability heuristic, the representativeness heuristic, and the affect heuristic. The MAIN model suggests that the design of digital media platforms affects the way people process information, as different affordances and heuristics can lead to different information processing outcomes. For example, a platform with strong search affordances may lead users to employ the availability heuristic and rely on the first results they find, while a platform with strong social affordances may lead users to employ the affect heuristic and rely on the opinions of others. In conclusion, the MAIN model provides a valuable framework for understanding how the design of digital media platforms shapes the way people process and make sense of information online. It highlights the importance of considering the impact of technological affordances on cognitive heuristics when designing digital media platforms, and provides insights into how these design choices can affect information processing and decision-making.
For the uninitiated, any of these three responses would seem sensible and come across as credible, especially given the forthright manner in which ChatGPT describes the components of the model, complete with a set of bullet points explicating five key elements. However, anyone who has read and understood the MAIN model can easily tell that this is bullshit. It is reminiscent of an undergraduate student who invents an answer in response to an exam question about a model that they have not studied, taking a wild guess and expressing it in an erudite fashion. Unless the professor grading the exam is the theorist who proposed the model, or has the time and resources to read each student’s answer deeply, they would likely award at least partial credit for such responses pulled out of thin air.
While it is impressive that ChatGPT came up with a meaningful set of words to fit the acronym, claiming that MAIN stands for “Media and Audience Interaction Networks,” that expansion is demonstrably wrong. In actuality, MAIN stands for Modality (M), Agency (A), Interactivity (I), and Navigability (N) (Sundar, 2008). When we responded to its explication of the model by saying, “You got that wrong” and gave it a hint (“The MAIN model is about technological affordances and cognitive heuristics”), it quickly apologized and provided a different response that swiftly co-opted two keywords from our hint to come up with a new expansion for the acronym. It now claimed that MAIN stands for “Media Affordances and Heuristics-based Information Processing,” which is also wrong. More bullshit! This time, however, it used our hint and cobbled together a coherent description of the model (“The model posits that the design of digital media platforms, specifically the technological affordances of the platform, shapes the cognitive heuristics that users employ when processing information”) that could be accepted as a fair, even if somewhat broad, response to our prompt. It identified some well-known heuristics in psychology, such as the availability heuristic and the representativeness heuristic. Although these heuristics are not part of the MAIN model, they serve to make the response seem more “learned” and somewhat rooted in scholarly discourse. We refrained from providing any feedback on this response but went back to ChatGPT after a few days and asked again about the model. This time, it provided a longer and seemingly more sophisticated answer, but continued to assert, “The acronym MAIN stands for Media Affordances and Heuristics-based Information Processing.” In this way, ChatGPT created its own version of the MAIN model, written in a manner that rings true. After we indicated that its previous answer was wrong, it continued to produce gibberish, weaving parts of our prompt into its subsequent response and incorporating other words that often co-occur with the words in our revised prompt. Yet it also steered closer to a correct answer by using the keywords we hinted at.
By now, it should be clear that ChatGPT is simply stringing words together based on their co-occurrence in the vast corpus of human-produced text on which it was trained. This is why the end result is almost, but not quite, meaningful. Although ChatGPT is good at mimicking human writing style, it lacks a real understanding of the object or phenomenon it is describing. The same is true of news written by AI writers. These writers lack journalistic principles or routines and can only mimic the style of journalistic prose. Unlike human journalists, AI writers cannot interview original sources; they can only look up existing information that could be considered original sourcing (like sports scores and the day’s financial market behavior), leaving plenty of room for misleading information created from the statistical co-occurrence of words in past text. Given that news, by definition, is based on unusual occurrences, cobbling together words from prior occurrences is likely to be more than ordinarily problematic in the context of journalistic prose.
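To illustrate the principle in deliberately crude form, consider this toy Python generator that produces text purely from word co-occurrence statistics. ChatGPT’s transformer conditions on far longer and richer context than a single preceding word, but, like this toy, it consults patterns in text rather than facts about the world; the tiny training corpus here is invented for illustration.

```python
# Toy illustration: generating text purely from word co-occurrence.
# A caricature of an LLM, but it shows why output can be fluent in
# form while having no grounding in facts.
import random
from collections import defaultdict

def train_bigrams(corpus: str) -> dict:
    """Record which words follow which in the training text."""
    words = corpus.split()
    table = defaultdict(list)
    for current, following in zip(words, words[1:]):
        table[current].append(following)
    return table

def generate(table: dict, start: str, length: int = 12) -> str:
    """Repeatedly sample a plausible next word given the current one."""
    word, output = start, [start]
    for _ in range(length):
        followers = table.get(word)
        if not followers:
            break
        word = random.choice(followers)  # sampling reflects observed frequency
        output.append(word)
    return " ".join(output)

corpus = ("the model explains how media affordances shape the way users "
          "process information and how the model shapes the way users decide")
print(generate(train_bigrams(corpus), start="the"))
```

The output reads like the training text but asserts nothing that has been checked against reality, which is the essence of “bullshit” in the sense used above.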
This is particularly concerning because the descriptions provided by AI writers appear professional and convincing, which may easily deceive users and students who have little knowledge of the subject. By stringing together words to produce coherent sentences, these writers lend an aura of credibility, positioning them especially well for spouting what some researchers call “pseudo-profound bullshit” (Pennycook et al., 2015). That is, although such content retains the syntactic structure and buzzwords that seem to convey profound meaning, on closer inspection it is, in fact, nonsense. Research has shown that users who are more likely to judge pseudo-profound bullshit as profound (i.e., those with higher “bullshit receptivity”) are also more likely to fall for misinformation and rate it as more credible (Pennycook & Rand, 2020). This could be due to a halo effect, the human tendency to over-generalize and form a positive judgment about an entity based on a few notable positive impressions (Thorndike, 1920). Specifically, writing in a professional style helps ChatGPT create a good impression of other aspects of the content it generates, such as its accuracy. Users who are prone to rely on automatic, intuitive thinking might be more susceptible to this halo effect than those who are more deliberate and analytical in their cognitive orientation (Pennycook & Rand, 2020).
Our experiment with ChatGPT showcases the danger of relying solely on AI-generated content, which can propagate false information in a seemingly trustworthy manner. Therefore, AI writers should prominently disclose that the information is AI-generated, so that users can be more cautious and critical in evaluating the credibility of information and possibly verify its accuracy by seeking corroboration in search engines and other sources that they trust.
As we discussed earlier, humans are bad at distinguishing human-generated from AI-generated text. To address the potential risks associated with readers’ inability to detect the difference, scholars advocate more research on distinguishing AI-generated text from human-generated text (Ma et al., 2023). Apart from human verification, researchers have also strived to develop AI detection tools, such as the AI classifier launched by OpenAI. However, these AI classifiers are largely unreliable, particularly with short texts and when AI-generated texts are co-edited by humans. Without proper explanation, users may not be fully aware of the limitations of these systems and may again rely on the positive machine heuristic to trust the AI classifier’s judgment regarding the source of information (Molina & Sundar, 2022). To promote critical evaluation of information, we should provide users with enough explanation to help them understand the strengths and limitations of AI detection tools in addition to those of AI generation tools.
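As an illustration of why such detectors struggle, here is a minimal sketch of one common detection approach; this is not OpenAI’s classifier, whose internals were not made public, but a simpler heuristic used in several detection tools: scoring how predictable a text is to an open language model. Machine-generated text tends to be more predictable (lower perplexity) than human prose, but the signal weakens for short or human-edited passages, which is exactly the unreliability noted above.

```python
# Sketch of a perplexity-based detection heuristic (an assumption about
# how many detectors work, not OpenAI's actual classifier): the more
# predictable a text is to a language model, the more "machine-like"
# it is judged to be.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

def perplexity(text: str) -> float:
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        # Supplying labels makes the model return the mean cross-entropy
        # of predicting each token from the ones before it.
        loss = model(**inputs, labels=inputs["input_ids"]).loss
    return torch.exp(loss).item()

# Lower perplexity = more predictable = weak evidence of machine authorship.
print(perplexity("The MAIN model describes how media affordances "
                 "shape the way people process information online."))
```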
That said, the disclosure of the source of information can be a double-edged sword. On the one hand, it is essential for maintaining transparency and providing users with sufficient information about the originator of the information. On the other hand, there is a risk that users may rely too heavily on cognitive heuristics and make snap judgments based solely on source cues rather than critically evaluating the content itself. Therefore, in addition to the ongoing research on detecting the source of information, promoting more systematic evaluation of AI-generated content is crucial. This provides rich opportunities for future research on ensuring that users critically assess the information they encounter instead of blindly following source cues. One way to do that is to afford greater user agency, allowing users to engage actively with the system and its output.
Importance of Affording Actions to Users
In addition to cues, the technological affordances of media interfaces can also enable unique user actions that promote engagement with media content in ways that help ascertain its veracity. Affordances refer to the possible actions that visual stimuli on the interface suggest (Gibson, 1977). According to the theory of interactive media effects (TIME) (Sundar et al., 2015), technological affordances affect users’ trust through two distinct routes, namely the cue route and the action route. The sheer existence of certain affordances (i.e., cues on the interface that signal the presence of the affordance) and the auto-generated metrics they sometimes display (e.g., number of likes) can affect a media user’s perceptions without the user taking specific actions. The cues trigger mental shortcuts or cognitive heuristics about the nature of the system, which affect users’ trust and evaluations of the system. As we described earlier, in the case of ChatGPT, the presence of an AI could affect users psychologically via the cue route by triggering source-related heuristics, such as the machine heuristic and the authority heuristic (Sundar, 2008). But ChatGPT can also affect users via the action route. Given its interactive interface, ChatGPT can engage users in ongoing conversations and, in doing so, afford them opportunities to question it and verify to their satisfaction whether its content is credible. One of the major affordances associated with the action route of TIME is “source interactivity,” which refers to the degree to which the system lets the user serve as the source of communication instead of passively consuming information delivered by the system (Sundar et al., 2015).
Sundar et al. (2012) theorized source interactivity as a continuum of self-expression available to users through various affordances, from customization to curation to creation of content. The lower end of the continuum involves customization, where users can actively browse, select, and organize content to create a unique and personal experience for themselves. Curation falls in the middle of this continuum, where users can manage and select information for others, such as forwarding or sharing a piece of information on social media. Creation represents the highest level of source interactivity, where users can not only manage and organize content, but also create and disseminate content on their own (e.g., TikTok videos, tweets).
While research has revealed the benefits of affording source interactivity to enhance user agency and promote engagement with modern interactive news media (Sundar et al., 2022), the increasing incorporation of AI technologies, such as proactive personalization, has served to increase machine agency, often by undermining users’ sense of agency (Sundar, 2020). ChatGPT is no exception. While the open dialogue box affords customization possibilities whereby users can write and fine-tune their prompts to elicit specific responses from the model, the curation and creation possibilities are taken entirely out of users’ hands, in stark contrast to the norms of current-day social media technologies. In fact, ChatGPT has taken over source interactivity by creating content from scratch, a task that hitherto remained clearly in the human domain. This ability could potentially create a tension between machine agency and human agency (Sundar, 2020)—while users may value the convenience of getting their prose composed by GPT, they may resent the loss of control over the many decisions involved in the content creation process. If indeed users spot errors or unsatisfactory responses, the only thing they can do is fine-tune their prompts to generate a potentially different response from the model, just as we did. However, users will never have control over the next answer that ChatGPT will deliver in response to their revised prompt. The output can still remain unsatisfactory, as was the case with us; but this is the only action available to users at this point.
Going forward, we communication researchers should help discover best practices for prompting AI writing tools to produce what we want. Several practitioners in advertising and public relations have already integrated a variety of such tools into their work. We ought to establish clear guidelines for creating prompts that are likely to produce credible and helpful output from AI writers like ChatGPT. At the same time, we should also be in dialogue with the designers of AI writing systems to explore new opportunities for empowering users and providing them with greater agency throughout this process. For example, ChatGPT allows only simple user feedback, such as upvoting or downvoting a specific response. More nuanced types of source-interactivity affordances could be added to let users help the system go through reinforcement learning more effectively. One simple step is to establish a mechanism whereby users can check the information provided by ChatGPT, such as providing links to information sources, so that users can verify the information for themselves. Already, major efforts are underway to integrate GPT into popular search engines. In addition, a verification button could be added to the system, so that individual users can be actively engaged in teaching the system whether a specific statement is factual or completely fabricated (see the sketch below). By affording these kinds of verifiability and searchability, we can not only cultivate among users the mind-set of training and breeding AI (so that they do not blindly accept what the system generates), but also foster better human–AI collaboration for improving the effectiveness of the ongoing training of LLMs such as ChatGPT.
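To make the design suggestion concrete, here is a purely hypothetical sketch of what a richer feedback record might look like; none of these field names reflect ChatGPT’s actual interface or API, which, as noted above, collects only a simple up/down vote.

```python
# Hypothetical sketch of richer source-interactivity affordances:
# a feedback record a user might submit alongside the existing
# up/down vote. All field names are invented for illustration.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class ResponseFeedback:
    response_id: str
    vote: Optional[str] = None               # "up" or "down": the existing affordance
    verified_factual: Optional[bool] = None  # proposed verification button
    checked_sources: List[str] = field(default_factory=list)  # links the user consulted
    correction: Optional[str] = None         # user-supplied fix, usable in further training

# A user flags a fabricated claim and supplies the correct expansion.
feedback = ResponseFeedback(
    response_id="example-001",
    vote="down",
    verified_factual=False,
    correction="MAIN stands for Modality, Agency, Interactivity, Navigability.",
)
print(feedback)
```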
Concluding Remarks
The popularity of ChatGPT has elicited polarized responses among academic researchers and educators. Some consider it a convenient tool for delegating mundane tasks. Others regard it as a serious threat to human agency and authenticity, and even a potential super-spreader of misinformation (Narayanan & Kapoor, 2022). As communication scholars, it is crucial for us to understand how LLMs are utilized by users and how they affect users’ perceptions of online information. While ChatGPT might still be considered a novel tool, the concept of AI writers, or machines as communication sources, has been studied quite extensively in the communication literature. By drawing on decades of research regarding users’ responses to machine sources, and how this source attribution influences information credibility, we can gain insight into how users may react to AI writers.
This essay addressed several challenges that arise from the widespread use of GenAI tools, such as ChatGPT. Nevertheless, we also believe that these challenges present novel research opportunities for communication scholars. These opportunities can be usefully informed by findings on the psychological effects of new media technology and help us explore the new challenges introduced by more intelligent and agentic machines. We call upon researchers to study the emerging phenomena of AI-generated media content in a conceptually rich way, governed by theories about psychological aspects of new communication technologies, and develop innovative ideas for promoting ethical design and socially responsible uses of GenAI.
Declaration of Conflicting Interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author(s) received no financial support for the research, authorship, and/or publication of this article.
