Abstract
Large language models (LLMs) have emerged as a novel form of media, capable of generating human-like text and supporting interactive communication. However, these systems raise concerns about inherent bias, as training on vast text corpora may encode and amplify societal biases. This study investigates overestimation bias in LLM-generated climate assessments, wherein the impacts of climate change are exaggerated relative to expert consensus. Using non-parametric statistical methods, the study compares expert ratings from the Intergovernmental Panel on Climate Change 2023 Synthesis Report with responses from GPT-family LLMs. Results indicate that LLMs systematically overestimate climate change impacts, and that this bias is more pronounced when the models are prompted in the role of a climate scientist. These findings underscore the need to align LLM-generated climate assessments with expert consensus to prevent misperception and foster informed public discourse.
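The kind of paired, non-parametric comparison the abstract describes can be sketched with a simple sign test. The ratings below are illustrative placeholders, not the study's actual IPCC or GPT data, and the sign test is one of several non-parametric tests the authors might have used:

```python
from math import comb

# Hypothetical paired severity ratings on a 1-7 scale (illustrative only;
# not the actual expert or LLM data from the study).
expert = [4, 3, 5, 4, 2, 5, 3, 4, 3, 4]
llm    = [5, 4, 6, 5, 3, 5, 4, 6, 4, 5]

def sign_test_one_sided(x, y):
    """One-sided sign test of H0: P(LLM > expert) = 0.5.

    Ties are dropped, as is standard for the sign test; the p-value
    is the exact binomial tail probability P(X >= k) with p = 0.5.
    """
    diffs = [b - a for a, b in zip(x, y) if b != a]
    n = len(diffs)
    k = sum(d > 0 for d in diffs)  # items the LLM rated higher
    p = sum(comb(n, i) for i in range(k, n + 1)) / 2 ** n
    return k, n, p

k, n, p = sign_test_one_sided(expert, llm)
print(f"{k}/{n} non-tied items rated higher by the LLM; one-sided p = {p:.4f}")
```

A small p-value here would indicate a systematic tendency for the LLM to rate impacts higher than the experts, which is the overestimation pattern the study reports.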
