Abstract
This study compares the effectiveness of artificial intelligence (AI)-assisted versus human-assisted fact-checking for different types of news, namely hard and soft news. To this end, an online experiment was conducted separately in Korea and the United States, employing a 2 (news type: hard news vs. soft news) × 2 (fact-checker: AI vs. human experts) between-group factorial design. Findings suggest that AI fact-checking, owing to its perceived objectivity and lack of bias, is potentially more effective for straightforward hard news, whereas human-assisted fact-checking appears better suited to addressing misinformation in soft news, which involves more subjective aspects of information. The study further proposes that AI verification of hard news and human experts' examination of soft news may trigger divergent indirect mechanisms—systematic processing driven by cognition or heuristic processing influenced by emotion—to counteract misinformation.
