Abstract
Fake news negatively impacts business operations across various sectors, including healthcare, retail, and finance. One promising way to mitigate this problem is through automated content moderation using machine learning (ML). However, existing methodologies have struggled to identify fake news due to its inherent challenges: class heterogeneity, concept drift, context dependency, and adversarial attacks. To address these challenges, we propose a novel strategy that focuses on the heterogeneous intentions behind fake news. Our approach builds on insights from prior social science research indicating that the intent behind creating fake news may differ from that behind disseminating it. This intent-driven variation influences the syntactic structures of fake news, leading to its heterogeneous nature. Accordingly, we argue that developing an ML model focused on the intent dimension of fake news can help overcome the issue of class heterogeneity, which often prevents effective model convergence. Moreover, since intent-driven linguistic patterns are generally less susceptible to temporal shifts and manipulative alterations, we expect this focus to enhance robustness against the other key challenges: concept drift, context dependency, and adversarial attacks. Grounded in a design science approach, we therefore propose a novel ML framework for fake news detection that centers on identifying and leveraging the intent behind fake news. Specifically, we first develop a domain adaptation model to infer the intent behind fake news, targeting contexts where sufficient training data is unavailable, a common limitation in intent detection. Then, building on the intent detection model, we redefine fake news detection by shifting from binary classification (fake vs. true) to ternary classification (deceptive fake news vs. non-deceptive fake news vs. true news), thereby reducing the computational burden on ML models tasked with processing heterogeneous information within the fake news category. When evaluated against state-of-the-art baselines, our approach demonstrates significant improvements in both intent detection and fake news classification. Robustness checks further demonstrate that our method is less sensitive to distributional shifts in data and is more computationally efficient than baselines. Moreover, post-hoc analyses reveal the underlying mechanisms through which incorporating intent into fake news detection enhances model performance, addressing the key challenges associated with fake news.
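The binary-to-ternary reframing described above can be illustrated with a minimal sketch. The abstract does not specify how the intent signal and the veracity signal are combined, so the function, names, and mapping below are hypothetical: they only show how an upstream intent prediction could decompose the heterogeneous "fake" class into the two fake subclasses of the ternary scheme.

```python
from enum import Enum


class NewsClass(Enum):
    """The ternary label space described in the abstract."""
    DECEPTIVE_FAKE = "deceptive fake news"
    NON_DECEPTIVE_FAKE = "non-deceptive fake news"
    TRUE = "true news"


def ternary_label(is_fake: bool, deceptive_intent: bool) -> NewsClass:
    """Hypothetical mapping: combine a veracity signal with an
    upstream intent prediction (the paper's domain adaptation model,
    not shown here) to produce the ternary label."""
    if not is_fake:
        return NewsClass.TRUE
    # Fake content is split by intent, so each subclass is
    # more homogeneous than the original "fake" class.
    if deceptive_intent:
        return NewsClass.DECEPTIVE_FAKE
    return NewsClass.NON_DECEPTIVE_FAKE
```

Under this sketch, a downstream classifier trained on the three resulting labels no longer has to fit one decision boundary around both deceptive and non-deceptive fake content at once, which is the convergence benefit the abstract attributes to the ternary formulation.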