Abstract

AI-generated misinformation and fact-checking approaches
The thematic articles in this issue collectively examine the growing challenge of AI-generated misinformation—its creation, dissemination via social media, and impact on political discourse and public trust—alongside the struggle of fact-checking and detection methods to keep pace. This rapidly evolving landscape, in which AI-generated misinformation blurs the line between truth and falsehood, erodes traditional sources of epistemic authority (Shin et al., 2025), making trustworthy information difficult to discern and underscoring the urgency of effective countermeasures. For instance, Cazzamatta and Sarisakaloğlu (2025) provide empirical data on AI-generated misinformation trends across countries, noting country-specific variations in topics and intentionality: political motives are prevalent across the United Kingdom, Germany, and Brazil, while profit-driven motives are particularly strong in Brazil.
A central theme is the transformation of fact-checking. Traditionally, fact-checking was a procedural or source-based process, deeply embedded within journalistic production, aimed at ensuring the accuracy of published information through verification against authoritative sources like government records and expert testimony. However, the rise of digital media ecosystems, characterized by fragmented and algorithmically amplified information flows, has shifted fact-checking's role. It is now viewed as a more complex, pluralistic mode of civic epistemology, functioning as an “epistemic intervention” and “sociotechnical labor” (Shin et al., 2025). AI fact-checking systems are seen as a form of “epistemic infrastructure,” using technical components like knowledge graphs and labeled datasets as “epistemic anchors” for verification tasks. Algorithms perform “epistemic triage,” deciding which claims are verified and which sources are credible (Shin et al., 2025). This evolution means fact-checking is no longer confined to traditional newsrooms but operates across distributed setups, including platforms and tech firms. Cazzamatta and Sarisakaloğlu (2025) implicitly highlight this evolution by analyzing how fact-checkers are adapting their practices specifically to address AI-generated misinformation.
A major challenge highlighted is the difficulty fact-checkers face in detecting and debunking AI-generated misinformation. Current AI detection tools are not widely adopted or consistently applied by fact-checkers, partly due to their limited accuracy and the fact that they provide probabilistic rather than definitive conclusions (Cazzamatta & Sarisakaloğlu, 2025). Comparing manipulated content with original material and contextualizing the misuse of AI are frequent debunking strategies. Tools like Reverse Image Search are also employed. However, the increasing quality of AI-generated content, particularly audio and content related to distant or under-documented events, makes detection more complex. Gondwe's (2025) study on AI models for detection shows that while advanced models like BERT and GPT outperform traditional methods, they face challenges of generalizability across languages and contexts, and explainability remains an issue.
A critical common thread is the acknowledgement that both human and AI systems involved in information processing and fact-checking are subject to biases and limitations. Shin et al. (2025) mention the embedding of values into infrastructure. Gondwe (2025) notes algorithmic bias as an ethical concern and discusses limitations in generalizing findings from English-language data. Mandava et al. (2025) specifically uncover and analyze a “geo-political veracity gradient” within news data that affects the performance of AI models trained in different regions, noting a dominance of Global North contexts. This data bias impacts computational tasks like fake news detection. This underscores that technological solutions are not neutral and require critical evaluation.
A few authors explore interventions aimed at helping users navigate the increasingly complex information environment. Li et al. (2025) specifically examine AI-generated content (AIGC) labels as a “nudging intervention.” Their study finds that AIGC labels are a feasible and replicable intervention that can help users distinguish AI-generated content from human-generated content. The presence of these labels does not significantly reduce user trust in the content or platforms. Labels can also help address digital inequality by enhancing users’ understanding of AIGC. Shin et al. (2025) also mention fact-check labels and credibility signals as design elements that influence how users interpret credibility and trust in algorithmic systems. Cazzamatta and Sarisakaloğlu (2025) note that fact-checkers sometimes include external links to guides on identifying AI-generated content as part of their debunking strategies.
The five thematic articles form a coherent narrative arc, moving from the broad theoretical framing of the problem to empirical observation, technological solutions, user-centric interventions, and a critical analysis of systemic biases.
Shin et al. (2025) set up the fundamental problems and context for the issue, describing the current era of contested truths and the challenges that algorithmically amplified misinformation poses to traditional epistemic authority. Their article frames fact-checking as a necessary sociotechnical labor and an epistemic institution adapting to the AI age, providing the theoretical grounding and posing the “what” and “why” of the problem, thereby setting the stage for subsequent articles to explore the “how” and “where.”
Cazzamatta and Sarisakaloğlu (2025) provide concrete, empirical evidence of AI-generated misinformation in action. Their article documents emerging trends and the types of AI elements used (such as deepfakes), and examines how human fact-checkers are currently detecting and debunking this content in specific countries (Brazil, Germany, and the UK). They vividly illustrate the practical, real-world manifestation of the theoretical problem described by Shin et al. (2025) and highlight the limitations of current human-led methods and their reliance on identifying visual anomalies in the face of increasingly sophisticated AI. Their work also identifies the need for tailored media literacy, connecting to the user focus in Li et al. (2025).
Gondwe (2025) directly addresses the potential for AI to serve as a solution. The study evaluates the effectiveness of various AI algorithms and models for detecting misinformation in real time, providing the technical details and performance metrics needed to tackle the scale and speed of misinformation propagation. Deep learning models show promising results but require broader linguistic and cultural datasets for generalizability. The study also emphasizes the importance of Explainable AI (XAI) to build trust and improve feedback loops, linking to Shin et al.'s (2025) discussion of trust-building interfaces and transparency in algorithmic systems.
Li et al. (2025) shift the focus from system design (Gondwe, 2025) and human fact-checking practices (Cazzamatta & Sarisakaloğlu, 2025) to the end-user. They investigate how AI-generated content (AIGC) labels affect users' perceptions of information credibility, particularly considering users' prior experience with AI and the content category (commercial vs. nonprofit). Their study treats labeling as a “nudging intervention” that influences user behavior. Users who are less familiar with AI perceive nonprofit content as more credible. The findings underscore the need for user education. This directly ties into the concept of trust-building interfaces and epistemic cues proposed by Shin et al. (2025) that guide user interpretation and trust.
Mandava et al. (2025) examine biases within the datasets and AI models used for the fake news detection systems discussed in Shin et al. (2025) and Gondwe (2025). Their article specifically identifies and analyzes the “geo-political veracity gradient,” a tendency for news from the Global South reporting on Global North topics to be more likely truthful, owing to differing economic incentives. This gradient creates challenges for AI fake news detection models trained in the Global North but applied in the Global South, often leading to more false negatives. This article serves as a crucial reminder that the technological solutions (Gondwe, 2025) developed within algorithmic infrastructures (Shin et al., 2025) and presented to users (Li et al., 2025) are not globally neutral and can embed biases, potentially leading to friction or reduced utility when applied in diverse contexts. By highlighting these embedded inequities, it complicates the efforts to reassert truth and build trust described in the other articles of this issue.
While the preceding articles delve into the critical issue of AI-driven misinformation and its countermeasures, the subsequent contributions broaden this perspective, examining the transformations that digital platforms and AI systems are imposing on knowledge, culture, and selfhood, often guided by underlying economic forces.
The platformization of knowledge, culture, and selfhood, driven by AI and economic imperatives
The other articles in this issue discuss how digital platforms and artificial intelligence (AI) are transforming social and professional life. They demonstrate how platforms and AI drive significant sociotechnical transformations, reshaping scholarly practices, cultural dissemination, personal expression, and raising important ethical and economic questions. They depict digital platforms as powerful new infrastructures and organizations that restructure existing systems. This is evident in scholarly communication, where Academic Social Networking Sites (ASNS) provide infrastructures that challenge traditional publishing and evaluation processes (Köchling, 2025). Similarly, Douyin is shown to transform from a social network into a cultural industry, leveraging its platform to promote intangible cultural heritage (Paquienseguy & Guo, 2025). Digital platforms also serve as spaces for self-presentation and social interaction, influencing how individuals curate their identities and share intimate experiences (Lan & Huang, 2025).
The economic logic of platforms is a connecting thread, particularly in Köchling (2025) and Paquienseguy and Guo (2025). ASNS rely on network effects, data commodification, and commercialization, while Douyin focuses on traffic generation and the valorization of content and key figures, supported by Multi-Channel Network (MCN) agencies and brand partnerships. The pursuit of profit in these platform models can intersect with the ethical concerns raised in Wang (2025), which notes that corporate pressures often prioritize profit over ethical safeguards in AI development.
Lan and Huang (2025) examine how users publicly share intimate experiences with AI chatbots, such as Replika, on social media platforms like Douban. Their work addresses how these human-AI relationships are performed and shared publicly, treating social media as a critical “stage” for curated self-presentation. Users engage in sharing for self-affirmation, seeking validation and recognition. This process involves regulating behavior to manage public perception and navigating vulnerability, which is often intertwined with narcissistic tendencies, using external approval to shield against insecurities.
Köchling (2025) analyzes how platforms like ResearchGate and Academia.edu function as new infrastructures and organizations for scholarly communication. She argues that these platforms, driven by network effects, data monetization, and algorithmic control, represent a distinct organizational format for scholarly exchange, creating both opportunities for greater visibility and potential issues related to commercialization and data ownership.
Paquienseguy and Guo (2025) analyze how Douyin (the Chinese counterpart of TikTok) is transforming from a social media platform into a cultural industry by promoting China's Intangible Cultural Heritage (ICH). Leveraging AI algorithms and supported by national policies, Douyin professionalizes vloggers and collaborates with ICH inheritors, cultural institutions, and MCN agencies to curate and disseminate content. This ecosystem integrates heritage with digital commerce, using strategies like brand partnerships and content monetization to drive traffic, validate ICH, and support economic development.
Wang's (2025) review of the book “Artificial Intelligence, Strategic Communicators, and Activism” argues that strategic communicators must transition from being advocates for AI to critical activists. The book highlights the potential negative impacts of AI, such as societal inequalities, environmental harm, and ethical complexities, emphasizing that communicators should use their expertise to mitigate these threats and advocate for healthier communities.
Overall, all nine articles concern how AI transforms digital platforms and the resultant impact on various facets of information, communication, and cultural processes. These technologies are not just tools but are depicted as shaping complex sociotechnical systems and infrastructures. The articles emphasize the role of user interaction and user perception within these platformized social environments: user trust is influenced by interface design and content labeling, and AI-enabled digital platforms serve as spaces for self-presentation and the negotiation of identity, even in human–AI relationships. Underlying these developments is the economic logic of platforms, characterized by network effects, data commodification, and algorithmic control, which shapes content visibility and user behavior. Furthermore, geopolitical context and public policy are shown to play a significant role in shaping platform strategies and the types of content promoted or issues addressed.
Footnotes
Declaration of conflicting interests
The author declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author received no financial support for the research, authorship, and/or publication of this article.
