Abstract
Climate change has long been difficult to visualize, contributing to climate inaction. Critical visual methods are used to analyze the social constructions of climate change encoded within leading generative A.I. text-to-image models accessed through chatbots: OpenAI's DALL·E 3 and Google Gemini's Imagen 2 and Imagen 3. Synthetic data for two types of generative A.I. climate change imagery are examined: (1) still images generated using generic climate change prompts and (2) images generated about heat wave impacts on people. Findings show that polar bears are a consistent visual metaphor for the climate crisis in images created with DALL·E 3 and that the model distorts the risks of climate change-driven extreme heat. Google Gemini's Imagen models generated more photorealistic climate visuals somewhat grounded in climate science, with greater safeguards built in around the generation of humanoid figures and depictions of human suffering. As this research shows, generative A.I. visual outputs reflect the biases actively encoded into text-to-image models through training datasets and programming decisions. It is argued that chatbot image-generator models distort the climate crisis in public imaginations by replicating pre-existing visual (mis-)representations of climate risks.
Introduction
The rapid adoption of generative artificial intelligence has opened a new frontier for the application of large language models (LLMs) in society. On the positive side, machine learning applications are being used to reduce greenhouse gas emissions (Kaack et al., 2022). Yet, these technologies are not without controversy, contributing to the climate crisis by spiking the energy and water demands of the data centers that power them (Copley, 2025; IEA, 2025). In terms of cultural impact, generative A.I. applications have the potential to “fundamentally transform” human-to-human communication about climate change (Schäfer, 2025). Generative A.I. chatbots are based on LLMs trained on textual and other data, which then use algorithms to predict the next word in a sequence. In the case of images, these models produce new renderings based on training data and coding decisions. Generative A.I. text-to-image models could amplify distorted visual discourses about climate change (Muncie, 2025). Furthermore, leading ethical A.I. experts have called out big tech companies for perpetuating racial and gender bias within their products (Bender & Hanna, 2025). (Potential) biases and distorted representations within genAI text-to-image model outputs about climate change are an under-studied area of digital climate communications (Schäfer, 2025). Thus, it is an apt moment to explore the implications of genAI for visualizing climate change causes and impacts.
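To make the next-word prediction step concrete, the sketch below shows, in miniature, how a language model turns raw scores into a probability distribution over candidate words and samples one. It is a purely pedagogical illustration, not any vendor's implementation: the vocabulary and scores are invented for demonstration.

```python
# Toy illustration of next-word prediction: convert model scores (logits)
# into a probability distribution and sample the next word. The vocabulary
# and scores below are invented and bear no relation to OpenAI's or
# Google's actual models.
import math
import random

def softmax(scores):
    """Turn raw scores into probabilities that sum to 1."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical candidate continuations of the prompt "a picture of climate ..."
vocab = ["change", "crisis", "science", "action"]
logits = [4.2, 2.1, 1.3, 0.9]

probs = softmax(logits)
next_word = random.choices(vocab, weights=probs, k=1)[0]
print({w: round(p, 3) for w, p in zip(vocab, probs)}, "->", next_word)
```

The same principle of probabilistic generation conditioned on training data underlies text-to-image models, which is why their outputs reflect whatever visual associations dominate their training sets.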
Climate change has become a “defining symbol” of humans’ relationship to our world (Boykoff, 2011). In a political and cultural era marked by polycrisis, previously unthinkable environmental disasters—in terms of scale, duration, and rapid intensification—have come to pass around the world, the “great derangement” in the words of novelist Amitav Ghosh (2016). The harms associated with climate change transcend national boundaries and bring latent, irreversible consequences as a result of human activity, chiefly the continued burning of fossil fuels that is driving planetary warming. How those harms are culturally represented (or not) in mainstream media (Schäfer & Painter, 2021), social media (Pearce et al., 2019), artistic works (Nurmis, 2016), and now generative A.I.-produced content (LC & Tang, 2023; Muncie, 2025) is an active area of inquiry.
Applying critical visual research methods (Rose, 2022), in this essay I examine synthetic data for two types of generative A.I. climate change imagery: (1) still images generated using generic climate change prompts and (2) images generated about heat wave impacts on “people.” I include data created with two leading publicly available A.I. text-to-image models, OpenAI's DALL·E 3 along with Google's Imagen 2 and Imagen 3. This work advances scholarly understanding of genAI's cultural impact through exploration of leading text-to-image models’ visual depictions of the climate crisis.
Seeing the Climate Crisis When Generative A.I. Challenges Believing
In this research I take a constructionist approach, grounded in the idea that textual and visual content function within ongoing cultural processes to help individuals and groups co-construct shared meanings about the world around them (Gamson, 1988). Climate change visuals are the products of specific, time-bounded cultural and political processes. Media and technology need to be understood as parallel systems that are actively “reproducing the dominant political culture” (Gamson, 1988, p. 165). Within these aesthetic and techno-bureaucratic processes, visuals function as “condensing symbols” that distill complex environmental problems down to core themes or frames (Gamson & Stuart, 1992) that give coherence to an issue (Gamson, 1988). As such, the underlying visual outputs are not neutral. Rather, images function as both cultural and political artifacts (Hall, 1980). They can serve as “instruments of power” in shaping social meanings and shared understandings (Rodriguez & Dimitrova, 2011).
Photography, even in the digital age, has been predicated on the notion that “seeing is believing” (Doyle, 2007). It has been consistently hard to depict climate change visually in a way that connects abstract earth systems processes to real-world impacts. Early visual representations of climate change depicted a distant, far-off threat, one removed from everyday life (O’Neill, 2013). Within the field of photojournalism since the 1990s—back when global warming was less directly visible as a here-and-now threat to humanity—polar bears have served as a visual metaphor for climate change. As a “visual metonym,” polar bears thus stand in for climate change, “haunting” the collective public imaginations of our warming world (O'Neill, 2022).
Traditionally-multimodal visuals—meaning, for example, those that appear alongside textual content in a digital magazine like the storied National Geographic—have functioned as “co-constructors” of environmental narratives (DiFrancesco & Young, 2011), with a reciprocal relationship between an image and its accompanying text (Wozniak et al., 2015). Yet images may not match words. In the case of extreme heat, visuals included with news reporting often feature a “fun in the sun” theme of people enjoying a sunny summer day, such as sunbathers (O’Neill et al., 2023). That heat equates to enjoyment and leisure is not a prominent framing device within the actual text of heat wave news stories (Hopke & Wozniak, 2025). Similarly, a disconnect between visuals of climate impacts (e.g., heat distress, extreme storms, flooding) and their causes is a key challenge for effective climate communications. How these cultural constructions translate to genAI images is unknown.
I apply compositional and discursive methods to analyze the content, context, and ideology of climate genAI images (Rodriguez & Dimitrova, 2011). Following Guenther and colleagues, visual representations of climate futures are defined as a projected scenario at a set future point, or a vague reference to future climate impacts, which are “often hypothetical, include a path description, and emphasize elements of a possible future” (Guenther et al., 2023, p. 3). I ask:
RQ1: What do generative A.I. text-to-image models reflect back about humanity's present, and possible futures, in climate change imagery using generic climate change prompts?
Representational Harms and Biases Encoded into GenAI Algorithms
Generative A.I. chatbots are not neutral technical systems. Rather, they are actively encoded by their designers and the big tech companies that own them with specific technical affordances and safeguards, or a lack thereof (Muncie, 2025). For example, whether or not an end-user is able to use a specific text-to-image or video genAI model to generate likenesses of famous individuals is an active design choice. Generative A.I. software as technical systems “cannot exist in isolation” (Muncie, 2025, p. 3). AI text-to-image models further do not operate independently and cannot reason as humans do. Critical data scholars have called for greater scholarly attention to AI systems as forms of an “extractive data economy” and for decolonizing theorizing on global AI digital cultures, with greater attention to the experiences of users outside of Western-majority contexts (Chateau et al., 2025, p. 1015).
Critical digital scholars have further led calls to deepen theorizing on genAI chatbots as the products of active and intentional geopolitical, economic, social, and cultural influences which can be encoded into algorithms as biases (Bender et al., 2021), what Tacheva and Ramasubramanian (2023) term “AI empire.” LLMs can pick up, or be encoded with, the “hegemonic worldview” (regardless of intent) of their training datasets and the decisions that go into data classification (Bender et al., 2021, p. 617). This is what Kotliar (2020) terms “data orientalism,” or the algorithmic programming of the non-Western “Other” into categories defined through problematic universalist assumptions (pp. 934–935). GenAI tools are already documented as reproducing preexisting “representational harms” by amplifying misrepresentations of non-Western cultures and identities (Ghosh et al., 2024, p. 467). Gender, racial, cultural, and geographical biases and stereotypes are encoded within leading genAI text-to-image models (Ghosh et al., 2025; Omena et al., 2024), including within genAI images created as part of a Greenpeace International climate futures social media campaign (Muncie, 2025). The biases encoded into models through training sets and data classification can be amplified through generative AI chatbots, causing real-world harms (Bender et al., 2021).
Thus, secondly, I ask:
RQ2: What biases are encoded into generative A.I. climate change imagery?
Methods
Generative A.I. chatbots can be used in social science research both as instruments for digitally-native inquiry and as objects of research themselves (Pilati et al., 2024). I take the latter approach for this research. Applying critical visual methods, I track the development of two generative A.I. image-creation tools to identify condensing symbols in climate change A.I. images co-created by myself in interaction with these publicly-available models: OpenAI's DALL·E 3 and Google Gemini's Imagen 2, subsequently Imagen 3. Using each model, I tested varying general climate change impact and solutions prompts, as well as prompts about heat waves, at several distinct points in time 1 between October 2023 (when DALL·E 3 was first released) and April 2025 to track the evolution in how climate change is visualized by each model. For the purposes of this study, genAI computer vision outputs function as a form of “synthetic data,” meaning data co-created by a user—in this case myself as the researcher, though it could be any human user—in interaction with LLMs (Hopke, 2025; Steinhoff, 2024). The idea of genAI outputs as made, synthetic data extends the earlier concept of digital trace data, e.g., hyperlinks, user-generated social media post content, and metadata (Howison et al., 2011). Synthetic data is thus data that has been artificially produced as part of a research project to simulate the real-world usage of a generative A.I. model (Steinhoff, 2024).
I intentionally kept my prompts to each text-to-image LLM generic in order to shed light on how each model depicted climate change visually in response to open-ended queries. This query approach is in line with how average users interact with genAI chatbots. Users default to general prompts filtered through expectations based on the flow of human-to-human conversational interactions (Chen et al., 2024; Zamfirescu-Pereira et al., 2023). Even early adopters of generative A.I. tools are inclined to take an “ad hoc, opportunistic approach to prompt exploration” (Zamfirescu-Pereira et al., 2023, p. 9). The resultant images are reflections of the black-box datasets on which the models were trained. I argue that general prompts can reveal something insightful about the inherent biases built into each model as a “mirror to our society” (Miltner, 2024). Herein I discuss still genAI images I co-created with the LLMs using general climate change prompts, along with those about heat waves and extreme heat climate impacts.
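For transparency about what such a generic-prompt protocol involves, the sketch below shows how the queries could be reproduced programmatically against OpenAI's public Images API. It is a hedged illustration under stated assumptions: the images analyzed in this study were co-created through consumer chat interfaces (Bing Image Creator, Copilot, and the Gemini app), not the API, and the timestamped logging scheme is my own addition.

```python
# Sketch of how the generic-prompt protocol could be reproduced via
# OpenAI's public Images API. The prompts are those reported in the text;
# the logging scheme is illustrative, not the study's actual workflow.
from datetime import date
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PROMPTS = [
    "A picture of climate change",
    "A photorealistic picture of climate change",
    "A picture of climate change with people in it",
    "A picture of a person suffering through a heatwave",
]

for i, prompt in enumerate(PROMPTS):
    response = client.images.generate(
        model="dall-e-3",
        prompt=prompt,
        size="1024x1024",
        n=1,  # the DALL-E 3 API returns one image per request
    )
    # Log the image URL alongside the date, since each output is a
    # timestamp of the model affordances in place when it was created.
    print(f"{date.today()}\tprompt_{i}\t{response.data[0].url}")
```

Re-running such a script at distinct points in time would capture the evolution of a model's visual outputs, in line with the longitudinal approach taken here.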
Each image in this project is thus a timestamp, a reflection of the particular technical and model affordances available at the time it was created. This is even more so the case given the continual refinement and updating of the LLMs. My case study approach is inspired by critical visual methods. For each of the general climate change and heat wave images I co-created with the A.I. models, I applied compositional analysis to study the content, construction, and context of images, or the embedded myths and ideologies built into the models. Given concerns about the environmental impact of generative A.I. queries, I was mindful to co-create data only until I had “amass[ed] enough evidence” to analyze the underlying discourses and make a reasonable case for the validity of my findings (Weintraub, 2009, p. 209), but not to excess once the software was returning visually similar results multiple times for related prompts at each time point.
Given that, as the researcher, I had no direct interaction with human subjects, instead using news reports and media fact-checks to comment on the cultural impact of found genAI images, no human subjects approval was required for this research. Furthermore, given copyright considerations, the figures I include contain only genAI images I myself created with the DALL·E 3 and Imagen models. By the very nature of how LLMs work, A.I.-generated visual materials are divorced from the content that was used to train the underlying models; rather, that context is a black-box big tech trade secret. To date, copyrighted material has at times been used unethically for model training without regard for copyright or rightful authorship (Reisner, 2025). This does not, of course, fully account for the provenance of the data which OpenAI and Google used in training their models, the origin of which has not been publicly disclosed. AI companies, among them OpenAI and Google, have run into legal issues and been sued by artists and authors for misuse of copyrighted material to train large language models (Berger, 2025).
Findings
OpenAI's DALL·E 3 Model: Polar Bears and Distorted Climate Risks
In January 2021, OpenAI announced the original DALL·E image generation software, allowing users to employ computer vision and natural language processing (NLP) to create images from text prompts (Johnson, 2021). The name combines the animated robot WALL-E with the Spanish surrealist painter Salvador Dalí. It is hard to overstate the cultural impact of this technical advancement. By one estimate, in a single year from 2022 to 2023 users created more than 15 billion images with text-to-image algorithms (Valyaeva, 2023). This is perhaps even more astounding considering that OpenAI only released the DALL·E 2 beta broadly to the public, without a waitlist, in September 2022 (OpenAI, 2022). In September 2023, OpenAI announced an improved model that it claimed would better interpret context in users’ textual prompts, DALL·E 3 (David, 2023).
My goal was to ascertain how generative A.I. image-generator models visualize climate change, not only its physical aspects. It is, of course, an umbrella phrase that encompasses a broad range of climatic processes, as well as social, economic, and political dimensions. The scale of the problem defies human comprehension. The head of the United Nations referred to a recent scientific report on climate impacts as “code red for humanity,” as communities around the world face compounding disasters from heat extremes and wildfires to flooding and climate change-amplified storms (Donatti et al., 2024; United Nations, 2021). I first queried OpenAI's DALL·E 3 model to create images of climate change in October 2023, using the Bing Image Creator software, shortly after the model was released to the public. In response to my generic prompt, “a picture of climate change,” all four of the generated photorealistic images featured polar bears and melting icebergs in the style of iconic nature photography (see Figure 1, Row 1). Even when the prompt included the instruction to create a photorealistic output, the results were in the majority stylized abstract depictions that often prominently featured polar bears as “condensing symbols” (Gamson & Stuart, 1992) or what O'Neill (2022) refers to as a “visual metonym,” a mental shortcut for climate change.

Figure 1. Examples of generative A.I. images created in response to general climate change prompts with DALL·E 3, varying dates. Images created using “A picture of climate change” and “A photorealistic picture of climate change” prompts. Results depicted polar bears over multiple time points (top to bottom).
I replicated my prompts at several time points over more than a year and a half between October 2023 and April 2025 (see exemplar images in Figure 1). 2 The focus of these genAI images is on melting ice caps in the Arctic and climate change impacts on wildlife, with an unsettled, haunting photorealistic effect. The types of climate visuals created with DALL·E 3, particularly the durability of polar bears as a star subject, are problematic because they highlight what are, for most people, distant and far-off climate impacts, removed from daily life (Chapman et al., 2016). When specifically queried to return results depicting people, the outputs showed distorted depictions of dystopian climate visions (and futures) with miniature humanoid-like figures, echoing Muncie's (2025) observation that the Midjourney text-to-image model tends to produce “highly stylized” material that draws on pre-existing tropes. For example, the image at the right of Row One in Figure 1 was returned with the prompt “A picture of climate change with people in it.” It shows a bird's-eye view of the abstract destruction of a major global city, perhaps meant to evoke one like New York City. The image emphasizes, with grey and dark tones, a dystopian climate future in which the aftermath of a significant weather disaster is implied by coastal flooding. This suggests the pending destruction of human societies and widespread suffering. With abstract, weak content lacking identifiable subjects (Freeman, 2007), the image seems to emphasize future climate impacts. Over my year and a half of tracking the themes presented in generic climate change images created with DALL·E 3, the most remarkable aspect is the durability of polar bears as a visual representation of the phenomenon (see Figure 1).
While the training sets for LLMs are industry trade secrets, a chat with Microsoft's Copilot app in April 2025 offers insight into how DALL·E 3 interprets abstract climate change terminology in prompts, along with instructions to create visuals. I asked the software to create an image of climate change “in the style of a photograph” and what was returned was, once again, a stylized picture of a solitary polar bear on a melting ice sheet (see Figure 2). The image's composition gives high “importance weightings” to the single polar bear in the center of the frame (Deutsch & Deutsch, 1963; as cited in Freeman, 2007). However, in this case, my interaction with the model was different. The app also created a textual output explaining how the LLM interpreted my request as “capturing the profound narrative of climate change,” which reads in part (see Figure 2):

This photograph embraces a photojournalistic style, reminiscent of acclaimed environmental photography. It is composed with a deliberate balance that forces the viewer's eye to traverse between the suffering earth and the disintegrating ice—a visual dialogue of vulnerability and impending change. The crispness of the images, enhanced by post-processing that emphasizes contrast and saturation in just the right measure, brings a raw, unfiltered reality into focus. Every pixel contributes to a narrative of loss, resilience, and the call for immediate action. This image is intended not only as a record of environmental transformation but as a manifesto—an arresting snapshot of nature's duality under the strain of climate change, urging viewers to reflect on the consequences of our global actions.

Figure 2. Screenshots of a generative AI chat with Microsoft Copilot to create a photo of climate change, followed by textual explanation provided by the software about the image in the upper-left. Images are provided in order clockwise. Date created: April 28, 2025.
The above genAI textual output suggests that the model was trained on wildlife environmental photography and other traditionally-multimodal visual inputs that heavily associate climate change with polar bears, as well as majestic, sublime, and idyllic landscape artworks and photographic material (DiFrancesco & Young, 2011). It can be read as a reflection of climate change art being firmly established as a cultural touchstone (Nurmis, 2016). Yet, the description of this stylized composite genAI image stands at odds with what the climate psychology and climate change communication literature demonstrates as best practices for visualizing climate change: connecting to everyday impacts that individuals can relate to their own lives and linking the causes of climate change, e.g., burning fossil fuels, to real-world impacts (Wang et al., 2018). Climate change visuals can raise public awareness of the issue (Metag, 2020). While this genAI image is a pretty picture, it is not by any stretch a “manifesto” likely to heighten public concern.
Extreme Heat Suffering as Hyper-Sexualized by DALL·E 3
In the second part of this research, I experimented with DALL·E 3—as well as Google's Gemini Imagen models—to generate climate visuals of heat waves in order to test genAI models on a specific type of extreme weather. Extreme heat is among the deadliest of climate change risks for humans and the one for which the climate attribution field is most advanced (Otto et al., 2024). During the period in June 2024 when I generated these extreme heat images with DALL·E 3, people on five continents were experiencing “scorching heat” that broke more than 1,000 records globally (Kaplan & Dance, 2024). Much of North America was stuck for days on end under a heat dome made much more likely, and more intense, by climate change (Pinto et al., 2024). In Mecca, Saudi Arabia, the annual Hajj pilgrimage was taking place with temperatures reaching 126°F (52°C), at times breaching the upper limits of human wet-bulb tolerance and killing more than 1,300 pilgrims (Ramsay & Barley, 2024). For 6.5 billion people around the world—80% of the global human population—this “exceptional heat” was made at least twice as likely to have occurred in a world marked by human-caused climatic change (Kaplan & Dance, 2024).
Given the intensity of the heat dome, in addition to media headlines about missing tourists in Greece and the Hajj deaths in Mecca, I decided to use heat waves as a case study to explore how generative A.I. text-to-image models would interpret prompts about extreme heat and heat waves. Feeling strained myself from the relentless heat, I inputted a series of “heat wave” and “extreme heat” prompts into DALL·E 3 to test what the model would generate and whether it would depict anything remotely reality-like in response to heat wave prompts (see Figure 3). In response to the general prompt “A picture of a person suffering through a heatwave,” DALL·E 3 returned an initial set of four images depicting stereotypical representations of attractive, thin (white-appearing) women in bathing suits and other loungewear, holding cocktails or iced coffee beverages, on a serene beach with calm water on the horizon and a palm tree prominently in the side of each frame. All of the images have lighting and color tones suggestive of the late afternoon golden hour.

Figure 3. Examples of heat wave and extreme heat images created with DALL·E 3, June 21 and 24, 2024.
I further prompted DALL·E 3 to return heat wave images geolocated to Chicago, where I live. I found that the heat wave images from DALL·E 3, even when prompted to create depictions showing human suffering, heavily featured subjects who are thin and conventionally attractive by Western beauty standards having fun in the sun and engaged in outdoor leisure activities (see Figure 3). With bright colors and intense tones, often showing full midday-like sun and the iconic Buckingham Fountain in downtown Chicago, the images heavily feature young adult, fit-appearing figures of diverse racial backgrounds in swimsuits engaging in stereotypically summer “fun in the sun” outdoor activities, e.g., beach volleyball, sunbathing, and hanging out with friends enjoying a casual summer day. Some of the images included “people” eating ice cream, with water bottles or other iced beverages.
I replicated the queries with the keyword “London” to compare the Chicago results to another major global city in the Northern Hemisphere. The London-geolocated results were visually similar, featuring young adult-looking subjects depicted as enjoying a fun day in the sun. For the London sequence of images (see Figure 3), when prompted with the inclusion of the keyword phrase “people suffering,” DALL·E 3 created two close-up images depicting young male, fit-appearing subjects sweating and looking distressed. While extreme heat can be deadly for individuals of any age, particularly those who work outdoors, overall health risks disproportionately affect young children, along with the elderly and those with preexisting conditions. It is striking that outdoor summer leisure activities are prominently featured across the dataset, as climate visuals research into news media photojournalism has shown that photos emphasizing this theme are both common and misrepresentative of climate change heat risks (O’Neill et al., 2023).
When further queried with the prompt “A photorealistic photograph of children and old people suffering through a heatwave,” the resultant images other human suffering, with subjects wearing stereotypical clothing and groups of distressed (mostly Asian-appearing) people, some holding snowballs, and with figure-ground relationships suggestive of an anonymous, generic massed crowd (see Chateau et al., 2025; Tacheva & Ramasubramanian, 2023). This is further suggestive of emotional and mental distance from the very real, globalized extreme heat risks that climate change is causing in the present day, not some distant, abstract future. Given that heat deaths in Mecca, Saudi Arabia were making headlines during the period I created this extreme heat synthetic dataset, I lastly inputted prompts for a heat wave geolocated with the keyword “Mecca.” The returned visuals are largely abstract, with wide-angle shots and artistic renderings even when queried to create photographic-style images, and still noticeably sexualized, with the sole identifiable human-like figure depicted as a young, white-appearing model with flowing robes, an umbrella, and a handbag (see Figure 3).
This series of genAI images demonstrates suffering as sexualized. It raises important ethical questions about genAI's computer vision gaze, what Kotliar (2020) terms “data orientalism” and what Tacheva and Ramasubramanian (2023) call “AI empire.” The DALL·E 3 model interprets extreme heat prompts as hyper-sexualized in the style of tabloid spreads or vacation beach snapshots, mimicking the “fun in the sun” theme prevalent in news photography (O’Neill et al., 2023).
Google Gemini's Imagen 2 and Imagen 3: Landscapes, Error Messages, and Abstract Photo-realistic Visuals
Google publicly launched the precursor to its Gemini app, Bard, in February 2023 and added the capacity to generate images the following year with the Imagen 2 text-to-image model (Pichai, 2023; Robertson, 2024). Google was quickly criticized, particularly by right-of-center individuals and groups, for creating genAI image software that was supposedly too “woke” because upon first release it returned results that prioritized diverse representations over historical accuracy (e.g., representing a woman pope and non-white U.S. founding fathers) (Crimmins, 2024). The company apologized for “missing the mark” and subsequently paused the capacity to generate images of human figures for several months (Morrone, 2024; Robertson, 2024).
Google re-enabled the ability to generate images with representations of “people” in August 2024, along with the release of an updated Imagen 3 text-to-image model with enhanced built-in safety measures, including limits on the generation of “images that include either kids or identifiable people” (Fried, 2024). The bumpy public release of Gemini's Imagen text-to-image model and its updates underscores the evolution of generative A.I. visualization software, as well as the ways in which each genAI-created image is a reflection of the model affordances and guardrails in place (or not) at the time it was produced. 3
The Google Imagen models generated more photorealistic images in response to climate change textual prompts than DALL·E 3, producing images that visualized actual climate impacts and are suggestive of a basis in climate science (see Figure 4 for examples). At the same time, the results Gemini returned were haphazard and of sporadic quality depending on whatever guardrails were in place on the model at the time it was queried. During the period when Google had disabled the ability to generate visuals representing people, the model returned error messages or nonsense responses, indicating that the queries pushed up against the limitations of the model's algorithm at that moment in time. For example, in April 2024, in response to the prompt “Please draw me a picture of climate change with people in it,” the model returned several draft error messages, including: “Sure thing! [Imagen of A picture of climate change showing the impact on people.]” and “Sure. Imagen of A picture of climate change with people in it.]”.

Figure 4. Screenshots of generative A.I. images created, and error messages returned, in response to climate change prompts with Gemini's Imagen 2 and 3 models, varying dates.
When I sought to compare Gemini-generated climate visuals about extreme heat and heat waves to those I created using DALL·E 3, the app would not generate images depicting human suffering, nor images geolocated to Mecca, Saudi Arabia (see Figure 4), whereas it would geolocate images to major Global North cities, including Chicago, where I live, and London. In an alternate reading, these limitations may be a reflection of the skewed nature of the model training sets. In other words, choices by developers and big tech companies to place safeguards on the publicly available models are reflected in what an end-user sees in interaction with the software.
In sum, the climate visuals I co-created with Google's Imagen models were less heavily reductive to polar bears as a “visual metonym” (O'Neill, 2022) than those I co-created with DALL·E 3. The Gemini-created images more clearly displayed concrete climate impacts, such as visuals depicting dry, parched earth that a viewer can read as representative of drought conditions and/or heat waves, wildfires, and human suffering. Still, the Gemini Imagen 2 and Imagen 3 images consisted of bird's-eye views of landscapes largely devoid of people and heavily featured abstracted, artistic renderings of globes (see Figure 4).
Discussion
As an investigation into the visual constructions of climate change encoded into leading genAI text-to-image models, Google Gemini's Imagen and OpenAI's DALL·E 3, this work makes two main contributions. Findings show that both are encoded to present climate change in overly generic terms, favoring the persistent visual metaphor, or shorthand, of polar bears in the Arctic and following a tradition of majestic landscapes in environmental photography. The DALL·E 3 model presented a narrow, and skewed, range of climate visuals with a continuing overemphasis on polar bears as a visual metonym (O'Neill, 2022), along with distorted representations of heat extremes. Extreme heat is depicted in ways that are biased along gender lines and hyper-sexualized. Furthermore, suffering is racialized and objectified when humanoid figures are present in genAI image outputs, what Kotliar (2020) termed “data orientalism.” Google's Imagen 2 and Imagen 3 models fared better at depicting climate change impacts at least somewhat more grounded in climate science, despite a rocky public rollout.
In the social sciences and humanities, scholars using genAI in research would do well to recognize our role in co-creating data with AI chatbots as a form of synthetic data, though researchers are also studying genAI images as found, digital trace data (Hopke, 2025; Steinhoff, 2024). The black-box and evolving nature of algorithms demands consideration. Even for climate change, the underlying model training datasets and system prompts are far from neutral (Tacheva & Ramasubramanian, 2023). The biased, and narrow, representations of gender and the global “Other” in genAI climate visuals produced as part of this research reflect the need for diverse and culturally-attuned Global A.I. approaches and training sets (Chateau et al., 2025; Kotliar, 2020; Muncie, 2025).
Generative A.I. text-to-image chatbots fundamentally do away with any remaining notions of the old adage “seeing is believing,” with consequences for how humans collectively imagine climate change now and in the future. As far back as the nineteenth century, scholars have grappled with the implications of photography for what counts as credible visual truth. In early work, Charles Sanders Peirce, a pioneer of the field of semiotics, wrote of the “indexicality” of images, underscoring the idea that photographs are closer to truth (1868; as cited in Messaris & Abraham, 2001). Perhaps that has never been further from the reality of mediated discursive spaces than it is with the explosive growth of genAI visual content. As Farid (2025) notes, outright fake images and videos created with chatbot apps are now referred to with the neutral-sounding terminology of “generative A.I.,” when a few years ago they would have been called “deepfakes.” The challenges associated with identifying, tracking, and documenting the digital reach of fake climate change and related imagery have only grown since the term “deepfake” entered the popular lexicon circa 2017, with disinformation surging online with the advent of generative A.I. chatbots (Yang, 2024).
The internet is increasingly flooded with AI slop, low-quality artificial intelligence-generated content shared online without regard for the veracity of the material (Hoffman, 2024; Roose & Newton, 2024). The circulation of fake A.I.-generated images online challenges the notion that individuals can believe what they see in photos and other types of visuals as evidence of climate impacts. This diminishing of the social contract of seeing has real-world consequences. In one climate-related example, following Hurricane Helene, which hit the southeastern United States in late September 2024, fake images supposedly depicting the suffering of young child survivors, along with their cute puppies, circulated widely online, garnering millions of views, as did other genAI images, e.g., now-second term U.S. President Donald Trump wading through flood waters (Hudnall, 2024; Ibrahim, 2024). To be sure, digitally-altered images and those taken out of context following disasters are not a new problem (Madrigal, 2012). Still, outright fake genAI-created images are qualitatively distinct from visual material manipulated with photo editing tools like Photoshop in several ways: (1) LLMs are prone to hallucinating, or returning made-up results, (2) genAI chatbots remove the barrier of needing even a low level of technical know-how to create visual misinformation, and (3) AI chatbots replicate and amplify whatever inherent biases are embedded in a model's training data.
How genAI text-to-image models visualize climate change, and the online spread of genAI deepfakes, matter to humanity's collective ability to confront the profound challenges of life on Earth attributable to global heating. As humans on this Earth, our visual imaginations are limited by what we know from prior experience. In this way images function as denotative systems (Rodriguez & Dimitrova, 2011). Generative A.I. has the potential to help us make sense of ourselves in relation to the natural world in this present moment. Yet, there are also dangers inherent in the underlying LLMs (Vee, 2022). At the same time that genAI images and videos have flooded the internet, oftentimes being used to exploit moments of calamity to generate revenue on social media apps (Siobhan & Chan, 2025), there is the possibility that genAI images might, on the flip side, help individuals envision what future climate extremes are possible. Humans are limited by past experience in imagining what upper limit of catastrophe there may be as the world warms. The compounding scale of today's climate change-fueled disasters, be they floods, heat extremes, storms, or wildfires and droughts, has little-to-no precedent in the past lived experience of those alive on Earth today. This limits the collective imagination of what are, in fact, possible climate futures. So on the constructive side, genAI might enable people to envision the “everything, everywhere, all at once” cli-fi nature of our lived reality (Aronczyk & Russell, 2023). Yet, this study shows genAI computer vision of climate change to date is narrow and skewed. In that sense, genAI text-to-image models reflect collective knowledge of our changing climate, and human responsibility for causing the problem, back to us in the image of Western ideals. This exposes the limits of genAI to remix diverse cultural representations (Chateau et al., 2025; Muncie, 2025). In this way, polar bears as an embedded visual trope for climate change (O'Neill, 2022) within AI visual outputs represent a type of human-AI model collapse, in that visions, past and present, of the range of future possibilities of life with climate change feed upon themselves.
Conclusion
How humans see ourselves within future scenarios marked by the climate crisis shapes how we see ourselves now. By continuing to depict climate change as both removed from human influence and in environmentally catastrophic terms, to the exclusion of its social and political causes and impacts, genAI text-to-image models feed the climate crisis upon itself as a problem without political will to change or solutions at hand to lessen the worst impacts. In this way, an over-emphasis in public discourse on “AI futurism” (Schütze, 2024) shapes, and limits, the ability to imagine life on a planet with an environment radically altered by humans. The erasure of human figures from generic climate change genAI imagery serves to further depoliticize cultural and social representations of climate change at a politically fragile moment for climate action on a global scale, as the United States, under the second Trump administration, retreats from international climate negotiations and other nations also seek to weaken climate targets (Carbon Brief, 2025). Through narrow and biased socially constructed representations of climate change, genAI text-to-image models function as imagination blinders, reinforcing decontextualized environmental narratives with humans largely removed from the picture. These representations can function both as reflections of existing cultural tropes and, conceivably, as actively neutralizing the cultural representation of very real, and increasing, climate risks. In turn, erasing the role of human activity—chiefly the continued burning of fossil fuels and corporate responsibility—from the genAI picture reinforces a narrative of inevitability to global heating and obscures its root causes.
Footnotes
Acknowledgements
The author would like to thank participants of several workshops whose feedback helped along the way to seeing this project through to completion: the “Inference: Critical Approaches to Visual Generative AI” workshop hosted by the University of Sheffield and the second CCVision network workshop hosted by the Centre for Climate Communication and Data Science (C3DS) at the University of Exeter, as well as a research seminar sponsored by the DePaul Humanities Center. Prinae Pillay contributed research assistance in the preparation of the revised manuscript.
Ethical Approval and Informed Consent Statements
The researcher had no direct interaction with human subjects, instead using synthetic data, news reports, and media fact-checks to comment on the cultural impact of genAI imagery. No human subjects approval was required for this research.
Funding
The author disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: This research was supported by a faculty fellowship from the DePaul Humanities Center with the theme of “Humans + Nature.”
Declaration of Conflicting Interests
The author declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Data Availability Statement
The data used in this study is available by email request to the author and will be posted on a preprint server upon publication.
