Abstract
Humans often display a truth-bias—the perception that others are honest regardless of message veracity—but does this phenomenon extend to generative artificial intelligence (AI)? We had humans and large language models make nearly 1,000 veracity judgments across different prompts. Human detection accuracy was near chance (50%–53%) with notable truth-biases (59%–64%); AI showed a substantially greater truth-bias than humans (67%–99%). GPT-4 also defaulted to truth, not suspecting deception when veracity assessments were unprompted. Together, people and AI judge most information to be true.