Abstract
In this essay, we argue that the applications of generative-AI technologies to science communication need careful consideration to ensure such uses are desirable, and socially and ethically acceptable. Early applications of GenAI in science communication, especially in public media, have drawn swift and overwhelmingly negative responses. Drawing on existing literature about generative AI in fields adjacent to science communication, and on the scholarship on the ethics of science communication, this article maps out the key ethical issues that the use of generative-AI technologies raises for science communication. Specifically, acknowledging that generative AI is not merely an output-producing technology but a constellation of governance, infrastructure, data, and human and computing systems, we argue that three dimensions of ethical concern need to be explored: the communication outputs of generative AI; the social and environmental impacts of using generative-AI technologies in science communication; and the narratives we tell about AI technology.
Introduction
Generative AI (GenAI) technologies, which can be used to generate text, images, video and other creative content, are emerging as potentially accessible ways of making science public. However, while such technologies offer opportunities and possibilities for science communication, they raise important ethical issues. Concerns over GenAI technologies range from industry transformations (deskilling knowledge workers, reducing employment opportunities for qualified professionals and so on), to concerns about the role of these technologies in exacerbating human disconnection and concerns about misinformation and disinformation (Woodruff et al., 2024). For example, in August 2024, the Australian Broadcasting Corporation revealed that Cosmos (a popular Australian science magazine) was publishing articles created using generative-AI technology, drawing criticism from its contributors and former editors (Purtill, 2024). Others, by contrast, have made the case for the effectiveness of generative-AI tools in combatting scientific misinformation and conspiratorial beliefs (Nabavi et al., 2025). These articles highlight the substantial yet contested role that generative AI does, could and will play in science communication, from science popularisation to making science accessible for target audiences.
Beyond the concerns frequently discussed about the ethics of GenAI technology development in general (see, e.g., Choung et al., 2023; Hagendorff, 2020), the ethics of GenAI technologies in science communication more specifically is still under-explored. Indeed, the editorial to a recent special issue on ‘Science Communication in the Age of Artificial Intelligence’ supports the idea that ethical considerations are a recurring theme (Kessler et al., 2025). Specifically, in cases where GenAI technologies have been used by university communication departments over the past year, researchers have observed ‘a shift from technical to practical and ethical considerations in AI adoption’ (Henke, 2025: 12). Beyond questions of fair use and the potential for biases in generated content, however, the ethical tensions of GenAI tool use in science communication more broadly remain unexplored.
Such ethical tensions are reflected in both discussions of GenAI technology ethics, more broadly, and the ethics of science communication, more specifically. A review of 22 major ethical guidelines for AI use reports that accountability, privacy and fairness are the most common considerations, appearing in about 80% of guidelines, with other major issues being transparency, safety/cybersecurity and common/public good (Hagendorff, 2020). Hagendorff notes that, aside from common/public good, these ‘are most easily operationalised mathematically and thus tend to be implemented in terms of technical solutions’ (p. 103). Turning to the ethics of science communication, this burgeoning field has, so far, only one framework, which is based on the principles of accuracy, utility, timing and generosity/fairness (Medvecky and Leach, 2019). While there is overlap between the two ethical frameworks (fairness, common/public good, usefulness, and transparency and accuracy), these frameworks and guidelines are based on distinct underlying contextual ethical approaches. Whereas ethical guidelines for AI technologies privilege a rules-based, classically deontological approach (Hagendorff, 2020), based on moral imperatives to act according to a prescribed set of rules, the ethics framework for science communication privileges situation-specific, reflection-based approaches closer to principlism (Beauchamp and DeGrazia, 2004) and virtue ethics (Slote, 2010).
Given this overlap in ethical frameworks, and given the complex contexts of science communication, it is critical for science communicators to openly engage with the ethics of using GenAI technologies, not as one single technology, but as a full system of integrated technologies, infrastructures, data, humans and cultures. In the same way that science communication highlights the practice of science as reflective of human history and experience (see, e.g., Montgomery, 1996) and shaped by socio-cultural and political environments, the discussion and use of GenAI technologies requires critical perspectives and consideration of the broad human context, from technology development, to data use, to governance, regulation, climate impacts of the technology and beyond. In this essay, we begin to explore ethical considerations associated with the complex integration of GenAI technologies into science communication workflows. We follow Metcalfe (2019) and take a broad view of science communication, ranging from media to dialogue, and encompassing ‘anyone who communicates science, whether they be scientists or professional science communication practitioners’ (Metcalfe, 2019: 383). We focus on three areas, each relating to a dimension of ethical concern that we argue is key for the science communication field: the communication outputs of GenAI technologies; the social and environmental impacts of using GenAI technologies in science communication; and the narratives circulating about GenAI technologies. We reflect on these areas as practice-informed science communication researchers with expertise in ethics, cybernetics and digital media studies.
Ethics #1: Outputs of generative AI
The first tranche of ethical issues associated with the use of GenAI technologies for science communication concerns the use of these technologies to generate outputs. What do GenAI technologies do, what content can they generate and how can science communicators go about generating such content? We respond to these considerations in relation to accuracy, ownership and copyright.
Science, whatever else it might be, is the source of some of society’s most reliable and accurate knowledge. A commonly held view or traditional imaginary of science is that it is guided by or towards truth – as science progresses, it becomes increasingly close to ‘the truth’ (Psillos, 2005). GenAI technologies, in contrast, have no commitment to truth; when it comes to large language models (technologies that power text generative AI, herein LLMs), the commitment is to semantic probabilities (Sison et al., 2024). As Emsley (2023) explains, ‘This allows informed guesses, with bits of false information being mixed with factual information’ (p. 1).
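To make this distinction concrete, consider the following minimal sketch (ours; the prompt, tokens and probabilities are invented for illustration and reflect no particular model’s internals). Generation samples each next token from a learned probability distribution, so a plausible-sounding falsehood can be produced with exactly the same fluency as a fact:

```python
import random

# Hypothetical next-token distribution following the prompt
# 'The boiling point of water at sea level is ___ degrees Celsius':
next_token_probs = {
    "100": 0.90,  # factually correct continuation
    "99": 0.07,   # close, but false
    "90": 0.03,   # plausible-sounding and false
}

def sample_next_token(probs):
    """Sample one token in proportion to its learned probability.
    Nothing here consults facts; only the distribution matters."""
    tokens, weights = zip(*probs.items())
    return random.choices(tokens, weights=weights, k=1)[0]

# Roughly 10 runs in 100 will assert a false value, delivered
# with the same fluency as the true one.
print(sample_next_token(next_token_probs))
```

The point is not that models are usually wrong, but that correctness is a statistical by-product of training data rather than a design commitment.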
Discussions on the use of GenAI chatbots and LLMs in science communication parallel long-standing ethical debates about the use of narratives in science communication: how far from the facts science communicators can stray when storifying science, and under what conditions (Medvecky and Leach, 2019). The persuasive capabilities of GenAI chatbots and content generation technologies raise questions about the purposes for using these technologies. As Sison et al. (2024) state, ‘Because ChatGPT produces highly coherent, natural-sounding, and human-like responses, users find them convincing and readily trust them, even if inaccurate’ (p. 4855). Similar issues have been raised about the persuasive nature of narratives (Dahlstrom and Ho, 2012) and the role that science communicators imagine knowledge playing in society (e.g. is science communicated for strategic reasons, for democratic, citizen-serving reasons or for some other reason, especially when topics are contentious or controversial? (Priest, 2018)). Overall, we argue that these concerns around accuracy and persuasion are connected to transparency: transparency about the GenAI model training data, transparency about response sources, and communicator transparency (and disclosure) regarding uses of GenAI technologies for content generation.
First, it is important to consider what data are used to train the models behind GenAI technologies, including which datasets have powered a given model, and how these data may shape any responses or content generated (Hacker, 2021). Science communication in recent years has seen a surge in research dedicated to diversity initiatives within the field, from classic ‘draw a scientist’ exercises with children (see Miller et al., 2018), through to empirical work exploring inclusive science communication (Judd and McKinnon, 2021). In this context, historical representations of science and engineering as a predominantly Western and male vocation are likely to be embedded in AI content generation engines (Kotek et al., 2023). This means that content produced by GenAI technologies for science communication may inadvertently perpetuate legacy norms that this community is actively critiquing. It is important to note that these biases may appear in subtle ways, from the use of language such as ‘both genders’ (Kamath et al., 2024: 230), which reinforces gender binaries, to the reinforcement of ableist social norms. Critically reflecting on the norms that may or may not be embedded in AI-generated content is central to new ways of practising science communication.
Second, the sources of data used to produce AI-generated content are often opaque. This means that many GenAI systems are considered ‘black box’ technologies, where specific outputs are ‘typically non-auditable and, often, non-replicable’ (Schlagwein and Willcocks, 2023: 235; see also Von Eschenbach, 2021). Model upgrades make it difficult for researchers to assess the accuracy of responses that users experience in typical science communication use, or to assess the potential for ‘wrongness at scale’ (Schäfer, 2023: 5, citing Ulken, 2022). ChatGPT has been found to commonly invent references (Buriak et al., 2023), for example. Tools using Retrieval-Augmented Generation (RAG) hold potential for improving response accuracy, as these tools retrieve content from domain-specific data sources and then use an LLM to generate user responses (Shan, 2024). Researchers argue for the future potential of RAG systems or tools to support journalists covering science (Nishal et al., 2024), to automatically fact-check climate claims (Leippold et al., 2025) and to detect and explain health rumours (Chen et al., 2024). However, the technology is still in its infancy for science communication-related applications.
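The RAG pattern itself is simple to state. The following self-contained sketch (ours; the corpus, scoring function and prompt template are toy stand-ins, with a keyword-overlap retriever in place of the embedding index a production system would use) shows the two steps: retrieve passages from a curated, domain-specific corpus, then constrain the model to answer from those passages only:

```python
import re

# Toy corpus of vetted, citable statements (illustrative only).
CORPUS = [
    "Source A: 2023 was the warmest year in the instrumental record.",
    "Source B: Human influence has unequivocally warmed the climate.",
    "Source C: Measles vaccination has averted tens of millions of deaths.",
]

def tokenize(text):
    """Lower-case word tokens, punctuation stripped."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def retrieve(query, corpus, k=2):
    """Rank passages by naive word overlap with the query.
    A production system would use embeddings and a vector index."""
    q = tokenize(query)
    ranked = sorted(corpus, key=lambda p: len(q & tokenize(p)), reverse=True)
    return ranked[:k]

def build_grounded_prompt(query, passages):
    """Constrain the LLM to the retrieved sources, so each claim in
    the answer can be traced back to an auditable passage."""
    sources = "\n".join(f"[{i + 1}] {p}" for i, p in enumerate(passages))
    return ("Answer using ONLY the numbered sources below, citing them. "
            "If the sources are insufficient, say so.\n"
            f"Sources:\n{sources}\nQuestion: {query}")

query = "Is the climate warming?"
prompt = build_grounded_prompt(query, retrieve(query, CORPUS))
print(prompt)  # this grounded prompt is then sent to an LLM of choice
```

Grounding answers in a retrievable, citable corpus is what makes such systems more auditable than a bare LLM, though the generation step can still misstate what the sources say.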
The ‘black box’ nature of GenAI technologies also means that there is no easy way for rights holders to clarify the conditions around the use of their materials (see, e.g., Al-Busaidi et al., 2024). As well as being vigilant about uncritically using automatic summaries and AI-generated responses to science content, science communicators must call for more transparency around what LLMs and model upgrades are trained on and where science communicators’ own content might be going. In particular, science communicators need to be aware of the risks associated with appropriating generated content that violates copyright, including plagiarism, and take steps to mitigate the risks associated with GenAI technology use in these circumstances. If it is not possible to determine the origin of content or to gain permission for its use, then using a different tool with more transparency around its training data, and/or relying on alternative sources for materials, such as public domain works or Creative Commons-licensed content, may be more appropriate.
Relatedly, there are issues around science communicators’ disclosure of GenAI tool use in content generation. A growing number of studies are finding limited or mixed audience trust in AI-generated science content (Schäfer et al., 2024). Unsurprisingly, those seeking science-related information using GenAI tools report higher levels of trust in the technologies, compared with those who do not use GenAI tools (Greussing et al., 2025). But seeking out information using GenAI technologies cannot be equated with unknowingly receiving AI-generated messages; disclosing that content has been AI-generated can decrease users’ preference for messages (Lim and Schmälzle, 2024) or reduce perceptions of message credibility (Lermann Henestrosa and Kimmerle, 2024). Indeed, with the 2025 rollout of the ‘AI Act’ (EU AI Act, 2024), the European Union is moving to mandate disclosure of AI use to end users by developers and deployers, especially in the case of higher-risk AI systems such as deepfakes and chatbots.
While science communicators may feel that the content they produce will be deemed less engaging or less credible if GenAI use is disclosed, all relevant parties need to be informed of its use, and content creators need to be transparent about the originality of their work. Disclosing the use of GenAI tools should be an expected standard, as it also embeds a practice of acknowledging the potential risks associated with generated content. Science communicators can also take steps to ensure that others are not substituting or positioning AI-generated content as original research, by supporting GenAI disclosure policies, advocating for the value of human research expertise and engaging in science communication-related watchdog activities. In practice, this might mean carefully considering: (1) how much we should rely on outputs being correct (even when quality control processes are in place to confirm the veracity of content), (2) the implications of using content that is likely to be more persuasive for audiences and (3) how the use of GenAI tools is disclosed to end users, be they readers of a news article or participants in a deliberative workshop.
Ethics #2: Social and environmental impacts of GenAI use
The second tranche of ethical issues focusses on questions about the social and environmental impacts of GenAI technologies. First, we look to scholars who have investigated applications for GenAI technologies in adjacent knowledge industries, including journalism, academia and academic publishing, and then explore related ethical concerns that have direct implications for the field of science communication.
In the journalism sector, GenAI technologies have been adopted by media owners on promises of expanding audience engagement, sometimes with problematic outcomes. For example, Apple recently suspended its AI-generated media notification feature following several high-profile complaints, admitting that the service was sending inaccurate and misleading headlines and summary alerts to users (see, e.g., Fraser, 2024). More recently, the Los Angeles Times withdrew its new ‘multiple perspectives’ AI-enabled feature after it supplied problematic content to readers (Betts, 2025). The new feature had no newsroom editorial oversight, raising concerns about inappropriate AI technology adoption and organisational decision-making across multiple production areas. Science communicators will likely face similar situations in future; organisation-level decisions to automate communication tasks previously undertaken by professional communicators highlight the importance of ethical GenAI governance and oversight in the organisations where science communicators work.
Science watchdog journalism is increasingly reporting on the uncritical and problematic use of GenAI technologies in scholarly communication and academic publishing. Retraction Watch (n.d.) reports a growing number of papers and peer reviews with evidence of inappropriate and unacknowledged LLM use, drawing on a method developed by Guillaume Cabanac (2024). Meaningless AI-generated images have been identified in academic publications (Pearson, 2024), highlighting gaps in quality assurance oversight. In mid-2024, the academic publisher Wiley shut down 19 of its academic journals, following problematic use of AI-generated content in manuscripts and the identification of manipulated images (Claburn, 2024). Abalkina et al. (2025) argue that the new suite of GenAI tools devoted to outsourcing research writing is a renewed response to the problematic ‘publish or perish’ research performance evaluation systems that have long encouraged the use of paper mills. Each of these examples highlights the key roles that science communicators (such as journalists), as well as others in the academic community, play in advocating for more ethical and responsible use of GenAI tools in research communication and in supporting an adaptive, open and ethically aligned research publishing system.
Finally, in considering GenAI technologies as a full system or supply chain, we must also consider the environmental impacts associated with the development of these technologies and their ongoing use. All AI systems have a physical presence, largely in the form of supercomputers for training new models and data centres for maintaining their use. These systems inevitably require mining of rare earth metals, and power resourcing at all stages, as well as significant water consumption to cool and enable computing resources of ‘the cloud’ (Monserrate, 2022; Monstadt and Saltzman, 2025). There are some estimates (at the time of writing) that using ChatGPT to create one email of 100 words consumes approximately 0.5 litres of water (Verma and Tan, 2024). As models and computing infrastructure improve in efficiency, this water use is estimated to decrease on a per-prompt and per-user basis (Willison, 2024). However, with increased demand, physical infrastructure to support GenAI tools is continuing to boom, and with it the substantial impacts on electricity grids and power consumption around the world (Crawford, 2024).
There has been increasing attention to consumer ethics and supply chains in other resource-intensive systems, for example, in fast fashion (see Horton et al., 2022) and dietary behaviours (see Judge et al., 2022). However, AI systems currently lack this broader visibility. This poses an increasingly pressing ethical quandary for science communicators engaging with environmental topics: ‘AI for climate’ remains an oxymoronic grey area. Thinking of AI in terms of its broader implications suggests that GenAI tool use needs human oversight, at both an individual and organisational level, and that before using AI for any task, some consideration should be given to the potential costs and benefits (i.e. is this GenAI query or prompt really worth the literal energy?), as the sketch below illustrates.
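To give a sense of scale, the following back-of-the-envelope sketch (ours) applies the per-email water estimate cited above (Verma and Tan, 2024) to a hypothetical communication office; the usage figures are invented for illustration, and per-prompt costs vary by model, data centre and cooling method, and are trending downward:

```python
# Cited estimate (at time of writing): ~0.5 L of water per ~100-word
# AI-generated email (Verma and Tan, 2024). Treat as an order of magnitude.
LITRES_PER_SHORT_PROMPT = 0.5

def annual_water_litres(prompts_per_day, staff, workdays=230):
    """Rough annual water footprint of a team's routine GenAI drafting."""
    return prompts_per_day * staff * workdays * LITRES_PER_SHORT_PROMPT

# Hypothetical ten-person office, 20 short prompts per person per day:
litres = annual_water_litres(prompts_per_day=20, staff=10)
print(f"~{litres:,.0f} litres/year (~{litres / 1000:,.0f} cubic metres)")
# -> ~23,000 litres/year: negligible per prompt, non-trivial once
#    prompting becomes normalised daily practice.
```

The exact numbers matter less than the habit of asking the question; visibility of such costs is precisely what the consumer-ethics turn in other supply chains has provided.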
Ethics #3: Narratives communicated about generative AI
The third tranche of ethical issues in science communication concerns technological narratives about GenAI. Specifically, this section explores concepts of hype, anthropomorphism and techno-optimism.
First, hype has long been regarded as a fundamental part of the innovation process, most obviously as imagined in the Gartner Hype Cycle (Dedehayir and Steinert, 2016). This standard model presents hype as evolving over time from the innovation trigger, to the peak of inflated expectations, to the trough of disillusionment, then rising more slowly to a steady plateau of productivity. Although there is debate about the empirical accuracy of this cycle (see, e.g., Steinert and Leifer, 2010), the critical idea here is that hype seems inextricably intertwined with science, technology and innovation. From promises of ‘breakthroughs’ to celebrity endorsements, the ethics of hype is a long-running concern for science communication (Master and Resnik, 2013). Hype, often defined in terms of exaggerated claims or hyperbole (Caulfield and Condit, 2012), can be viewed both positively and negatively in relation to science and technology; both the benefits and the risks of a given technology can be exaggerated (Bubela, 2006). Such exaggerations challenge well-established norms and expectations of truthfulness and accuracy associated with science, and GenAI technologies have indeed been subjected to both positive and negative hype (Shaban-Nejad et al., 2023).
More recent scholarship has invited a reimagination of the role and ethics of hype. Roberson (2020) suggests that hype is an important and perhaps necessary step for the democratisation of imagined futures: ‘Hype is the invitation that opens up a dialogue for response, new framings, and the contribution of additional knowledges to the design and re-design of science and technology futures’ (p. 549). This raises significant questions for communicators of science and technology, including the extent to which we, as a field, contribute to some form of hype about AI: what are the ethical implications of this hype, and what kinds of conversations do we enable or disable through hyping or restraining ourselves from hype?
Second, it is well recognised that evoking human-like emotions and traits is an impactful strategy for telling a compelling story, and it may not be surprising that anthropomorphism has become a consistent and engaging tool in science writing over recent decades (McGellin et al., 2021). In addition, using anthropomorphism to communicate about digital technologies is one of several approaches used for building trust between users and AI-enabled systems (see Li and Suh, 2022). This can and does prove useful in specific circumstances, such as encouraging users to engage with social or medical support technologies (Cohn et al., 2024). However, while such language is increasingly common parlance in science communication, GenAI technologies do not ‘think’, ‘make a decision’ or ‘hallucinate’ in ways analogous to human cognition or neural processing (Watson, 2019), nor are these technologies inherently and humanly gendered (especially as ‘female’, as in the case of legacy chatbots such as ELIZA, or the development of Siri, Alexa and Cortana; see Costa and Ribas, 2019).
Science communicators, and AI-field specialists (see Salles et al., 2020), need to consider the ethical implications of using increasingly normative anthropomorphic language to describe GenAI technologies. While seemingly innocuous, such framing may inadvertently sway readers to more readily adopt seemingly convincing instances of AI-generated health, finance or life advice (Cohn et al., 2024) and reinforce technically inaccurate mental models of how AI systems work, as these human-like descriptions of technologies solidify in popular science writing.
These reflections culminate in an opportunity for science communicators to avoid replicating historical trajectories that have favoured techno-centric explorations of complex socio-environmental systems, such as early climate change communication, and instead engage with the full breadth of AI technologies and their place in society. A reliance on hype or inaccurate analogies would be a missed opportunity for nuance, to the detriment of our field and audiences.
Conclusion
The integration of GenAI technologies into science communication processes and practices appears inevitable, and with it the ethical concerns that these technologies evoke. We have suggested three tranches of ethical concerns that need to be addressed: those associated with the communication outputs of GenAI technologies; the social and environmental impacts of using GenAI technologies; and the narratives we communicate about GenAI technologies.
In exploring the potential of GenAI technologies for science communication, those in the field are encouraged to consider the full system of knowledge, constructed by political, economic, social and environmental factors, and to critique it as a nuanced and human endeavour, just as they would any other socio-technological development. GenAI technologies are not standalone technologies, but constellations of governance, infrastructure, data, and human and computing systems. In addition, members of the science communication community need to recognise that they themselves are a key part of the system of GenAI technologies, both in how they use these technologies and in how they communicate about them. By acknowledging the critical role that communicators play in the socialisation and normalisation of new technologies, we propose that every member of our field consider their own ethical position on the use of and engagement with GenAI tools.
As noted in the introduction, this is only the beginning: a first pass at mapping out some of these ethical tensions to prompt deeper reflection on the ethics of using GenAI technologies for science communication. As AI technologies and their use develop swiftly, so too will our engagement with their ethical implications. There are also opportunities that arise, some of which we might think of as equally ethically important, such as the capacity for AI to act as a critical friend, or to help us think through and carry out ‘ethics work’ (Banks, 2016). Returning to the review of ethical guidelines for AI by Hagendorff (2020), ‘A transition is required from a more deontologically oriented, action-restricting ethic based on universal abidance of principles and rules, to a situation-sensitive ethical approach’ (p. 114). This aligns with existing scholarly views on science communication ethics that propose principles inviting ‘reflection about acting-in-context’ (Medvecky and Leach, 2019: 87). Here, acting ethically is fundamentally about ‘a practice of self-reflection’ (Keohane et al., 2014: 361) on possible actions and practices for the field, and the points of tension that emerge. Such reflection demands that we acknowledge the uncertainties, power imbalances and epistemological shifts that these technologies provoke. As science communication is reshaped by AI-generated content, those in the field are called on to adapt and intervene – critically, creatively and ethically.
Acknowledgements
Thank you to Lorenn Ruster for their comments on an earlier version of this manuscript.
Funding
The authors received no financial support for the research, authorship, and/or publication of this article.
Declaration of conflicting interests
The authors declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
