Abstract
The spread of fake news harms both individuals and the credibility of the media. To improve the efficiency of fake news detection, this study constructs a multi-modal news detection model comprising a text encoding module, a contextual semantic encoder, a news propagation encoder, and a fake news detection module that fuses semantic features with image recognition. In the experiments, the multi-modal model achieved significantly higher accuracy and F1 scores than the unimodal baselines, improving accuracy and F1 by an average of 7.57% and 7.34% on the POL and GOS datasets, and by 7.20% and 6.38% on the WEIBO and TWITTER datasets. Hyperparameter analysis showed that performance peaked at specific settings of the parameters r and k, and an ablation study further confirmed the contribution of the channel attention mechanism and the graph contrastive method to overall performance. These results indicate that multi-modal models hold a clear advantage in fake news detection, effectively exploiting information from different modalities to improve accuracy, which makes the study relevant to assessing the reliability of news information and the credibility of the media. Some limitations remain: the model may not generalize well beyond the tested datasets, and its complexity may hinder deployment in resource-constrained environments. Future work will explore simplified variants of the model and evaluate it on more diverse datasets to improve its generalization ability and practicality.
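The channel-attention-based fusion of modality features mentioned above can be sketched in miniature as follows. This is an illustrative, simplified implementation only: it assumes a squeeze-and-excitation-style sigmoid gate per channel, and the function names (`channel_attention`, `fuse`) and toy per-channel weights stand in for the learned excitation network described in the paper; none of them come from the source.

```python
import math

def sigmoid(x):
    """Standard logistic function used as the channel gate."""
    return 1.0 / (1.0 + math.exp(-x))

def channel_attention(features, gate_weights):
    """Reweight each feature channel by a sigmoid gate.

    `gate_weights` is a toy per-channel parameter standing in for the
    learned excitation MLP of a squeeze-and-excitation block.
    """
    gates = [sigmoid(w * v) for w, v in zip(gate_weights, features)]
    return [g * v for g, v in zip(gates, features)]

def fuse(text_feat, img_feat, gw_text, gw_img):
    """Gate each modality's features, then concatenate them into one
    multi-modal representation (a minimal stand-in for the fusion module)."""
    return channel_attention(text_feat, gw_text) + channel_attention(img_feat, gw_img)

# Example: fuse a 2-channel text vector with a 2-channel image vector.
fused = fuse([1.0, 2.0], [0.5, -1.0], [1.0, 1.0], [1.0, 1.0])
```

In a real system the gate weights would be learned end-to-end, so that uninformative channels of either modality are suppressed before the detector sees the fused vector.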
