Abstract
This paper explores the European Union's multifaceted response to the pervasive issue of disinformation, a challenge that has intensified since the annexation of Crimea in 2014. Disinformation poses significant threats to democratic processes and public welfare. The European Union's approach combines regulatory measures, strategic partnerships, and media literacy initiatives to address this phenomenon while safeguarding core democratic principles, such as freedom of expression. Key measures include the Code of Practice on Disinformation and the Digital Services Act, which aim to hold digital platforms accountable and ensure transparency. Furthermore, initiatives such as the East StratCom Task Force and the Rapid Alert System highlight the European Union's efforts to counter disinformation as a tool of hybrid warfare. This paper also emphasizes the critical role of citizens, whom the European Union seeks to empower through media literacy programs, enabling them to recognize and resist manipulative content. By examining the interactions between government actions, private sector involvement, and citizen engagement, this study provides a comprehensive analysis of the European Union's strategy against disinformation and assesses the challenges and future directions necessary to sustain democratic resilience in an evolving digital landscape.
Introduction
In 2014, following Russia's annexation of Crimea, the European Union (EU) began to tackle one of the thorniest issues facing modern democracies: the fight against disinformation. Disinformation is defined as “verifiably false or misleading information that is created, presented, and disseminated for economic gain or to intentionally deceive the public, which may cause public harm. Public harm includes threats to democratic processes as well as to public goods such as the health, environment, or security of Union citizens” (European Commission, 2018a).
Although the EU has established a definition of disinformation, this study reveals that reality is far more complex. Determining which information qualifies as disinformation and which falls under the legitimate exercise of freedom of expression is often challenging, and this ambiguity is one of the main challenges facing the European Union today. The dissemination of inaccurate or misleading information with the intention of deceiving the public or causing harm is just one of the many far-reaching effects of disinformation. Disinformation threatens social cohesion, democratic processes, and the stability of public welfare, which is why efforts to identify and counter it are at the heart of the EU's plan to protect democratic integrity. However, fighting disinformation is difficult not only because of its scale but also because of its nature. While digital platforms make it easier for people to obtain information and engage in public conversation, they also make it easier for harmful and manipulative content to spread quickly.
This paper explores the EU's multifaceted approach, which includes creating frameworks and standards for recognizing disinformation, defining regulations, and involving the public and private sectors in cooperative solutions. In addition to legislation such as the Digital Services Act (DSA) and the Code of Practice on Disinformation, which hold digital platforms to a higher standard of accountability, the EU has established new tools such as the Rapid Alert System (RAS) and the East StratCom Task Force and formed international partnerships to combat disinformation as a hybrid threat. These tools underline the importance of striking a balance between freedom of expression, state intervention, and the open-market principles that underpin EU values.
Furthermore, EU citizens themselves play a crucial role in this battle against disinformation. Enabling people to critically navigate the digital information ecosystem is essential both to detect disinformation and to strengthen democratic resilience. Through initiatives that promote media literacy, the EU hopes to foster an informed and engaged citizenry capable of fending off deceptive content. In fact, citizens are both the main targets of disinformation and the main defenders of a democratic society. Thus, this analysis evaluates the challenges and efficacy of current and developing EU strategies while examining the interrelated roles of institutions, citizens, and platforms. The analysis has been carried out within the AI4Debunk 1 project funded by the European Commission, which aims to counter disinformation by providing citizens with a comprehensive set of fact-checking resources to navigate the digital media landscape more consciously and make informed decisions.
The following paragraphs review the most significant initiatives promoted by the European Union, including the other main actors in this battle: the large digital platforms, which over the years have gained ever more power, and the citizens, who, although they have taken a leading role in the new society created by these platforms, also appear to be partly subjugated by them. However, this account cannot begin without analyzing the complex context in which these initiatives develop, in order to understand their nature and gain a clearer picture of the current situation.
Information Manipulation and Democracy
“What's on your mind?”. This is the question that 3 billion people around the world see daily on their Facebook walls, a question they often answer, exercising their freedom of expression. It is precisely this question, and the ability to answer it, that raises a crucial issue democratic political systems must confront in their fight against information manipulation.
Let us start at the beginning. At the dawn of the digital age, governments were reluctant to impose strict regulations on the expansion of online platforms, both out of a desire to encourage technological progress and out of a strong belief in a fundamentally democratic and pluralistic Internet. Companies like Google, Amazon, and Meta were able to grow and establish their dominant market positions by taking advantage of the fact that rules existing in traditional industries were not effectively applied to them. This situation inevitably translated into political influence. The growing influence of digital platforms and their collateral effects led governments to recognize the need to regulate the power wielded by these platforms (Hassan & Pinelli, 2022). The regulation of the digital sector touches on many aspects of contemporary society. We will focus on the communication domain, specifically on the challenges the European Union faces in attempting to develop an appropriate regulatory framework for this sector.
Although European constitutional traditions guarantee every individual the right to express their opinion, the ability to reach a broad audience was for many years the privilege of a few, giving them the power to influence and shape public opinion. This made the regulation of the information sector a shared responsibility between the State and the media sector (Pitruzzella, 2018). In this regard, the European Court of Human Rights (ECHR, 2021), within the scope of Article 10 of the European Convention on Human Rights, stipulates that the protection granted to journalists in reporting on matters of public interest is “conditional upon the fact that those concerned act in good faith based on accurate facts and provide ‘reliable and precise’ information in accordance with journalistic ethics”. ECHR case law therefore highlights the importance of setting limits on freedom of expression to ensure the right to information, whose abuses can lead to criminal sanctions, as in cases of defamation. Another key element in traditional information regulation, based on the roles of the State and the private traditional media sector, was ensuring informational pluralism, enshrined in Article 11 of the Charter of Fundamental Rights of the European Union, which states in paragraph 2: “The freedom and pluralism of the media shall be respected” (European Union, 2012).
The paradigm of communication has been significantly restructured by the emergence of digital platforms as new information media. In the past, the information paradigm was characterized by a one-way flow in which a single sender disseminated information to a large audience. In today's digital world, this paradigm has evolved into a “many-to-many” (n ↔ n) communication model, where every user can be both a sender and a receiver of messages. Although the growing diffusion of social networks and digital platforms has undoubtedly contributed to promoting informational pluralism, the lack of adequate regulation has simultaneously facilitated the massive emergence of illegal content, hate speech, and, not least, an unprecedented proliferation of information manipulation. Focusing on the latter, it is evident that this phenomenon has urgently entered the European political agenda. Digital platforms have become essential spaces for shaping public opinion; consequently, the distortion of information flows entails an intentional attempt to influence public opinion, with significant repercussions on states’ political agendas (Giusti, 2023). This problem is certainly not limited to the European Union but transcends its borders, much like the vehicle through which it travels. While totalitarian regimes can more easily defend themselves through direct control of communication channels, democratic systems must find effective solutions without compromising fundamental individual rights, chief among them freedom of expression.
This right is guaranteed by Article 11 of the Charter of Fundamental Rights of the European Union and Article 10 of the ECHR. These articles reflect what is established by the First Amendment to the US Constitution. It is precisely the First Amendment that Facebook CEO Mark Zuckerberg evoked in 2019, in response to growing pressure for increased accountability of platforms in managing disinformation. Zuckerberg stated: “In principle, in a democracy, I believe that people should decide what is credible, not tech companies”. Yet, despite such claims, Facebook at the time already employed around 35,000 people to review the accuracy and acceptability of information shared on the platform. This suggests that the appeal to the First Amendment was primarily a strategy to keep public authorities’ intervention at a distance from digital platforms, a strategy that critics of platform regulation can invoke and that, in effect, protects the very possibility of information manipulation (Hassan & Pinelli, 2022).
Delving into this right, Article 10 of the ECHR reads: “Everyone has the right to freedom of expression. This right shall include freedom to hold opinions and to receive and impart information and ideas without interference by public authority and regardless of frontiers”. This article clearly shows that freedom of expression, as enshrined in both the ECHR and the Charter, can be considered in part a negative right, meaning that it imposes an obligation of non-interference on the state. Therefore, direct action by public authorities against disinformation risks being immediately interpreted and labeled as censorship.
In this sense, it is important to remember that the constitutional history of public authorities is marked by a long process of regulatory evolution aimed at preventing the recurrence of past abuses of power. This history fosters a collective perception of the State as an enemy, rather than an ally, in promoting individual freedom (Lakoff, 2014). Conversely, the private sector is often perceived as a key player in the liberal context, embodying the ultimate realization of individual freedom. The view of the State as the enemy of the individual is a narrative frame that can easily be activated by those seeking to undermine public trust in government institutions, particularly in the context of disinformation regulation, where destabilizing strategies find their primary instruments of action. However, government entities cannot shy away from ensuring the well-being of a democratic society. Article 10 of the ECHR itself states that freedom of expression may be subject to legitimate restrictions, provided they are prescribed by law and necessary in a democratic society.
A democratic society is founded on the full realization of individuals’ fundamental rights, and freedom of expression materializes in the pluralism of opinions, from which public opinion emerges and guides government policies. It is therefore clear that there is an essential need to protect and regulate a digital environment that has so far been insufficiently governed and frequently exploited as a powerful tool of manipulation. The European Union has recognized digital platforms as the main actors in the regulation of information manipulation, given the increased awareness of their power, their technological and informational superiority, and their global operation.
However, giving these platforms any kind of regulatory authority raises serious problems. The protection of fundamental rights can be jeopardized if private organizations are given the power to censor or filter content without adequate transparency or appeal procedures. This concerns not only the right to freedom of expression but also data protection and the accuracy of the information itself. The Venice Commission has drawn attention to the shortcomings of current policies, pointing out that relying solely on private organizations to protect fundamental rights may not be sufficient (Hassan & Pinelli, 2022). This concern relates to the modalities of information delivery because, although platforms are not comparable to traditional publishers, they use algorithms that affect the visibility and accessibility of content. This reality, known as the “algorithmic society” (Balkin, 2018), raises crucial questions about the actual protection of the right to be informed and the quality of the information itself. At the same time, public authorities’ intervention in regulating disinformation risks fueling fears of excessive control, evoking the specter of a “ministry of truth” (Magnani, 2021). Moreover, overly invasive state control could interfere not only with freedom of expression but also with private companies’ right to operate in a free market, protected by Article 16 of the Charter of Fundamental Rights of the European Union. More specifically, digital platforms benefit from the eCommerce Directive, which establishes a form of exemption from liability for intermediary service providers (Bassini, 2019). However, it is important to remember that freedom of expression, often invoked as an argument against public authorities’ intervention in regulating online information, and the right to a free market are dynamic rights: their existence depends on their ability to adapt to changing political and social contexts. It is within this delicate and complex balance that the European Union has embarked on a decade-long journey to combat disinformation while protecting the democratic society it represents.
Initiatives by Governmental Institutions and Private Stakeholders
The necessity for the European Union to adopt rigorous measures against disinformation began to emerge in 2014, after the annexation of Crimea by the Russian Federation. Disinformation began to appear as a threat to the security of the European Union and, as such, was included in official documents as a “hybrid threat”. It is in the Conclusions of the European Council of 2015, followed the subsequent year by a Communication from the European Commission (2016), that we read for the first time: “Massive disinformation campaigns, using social media to control the political narrative or to radicalize, recruit and direct proxy actors can be vehicles for hybrid threats. (…) Perpetrators of hybrid threats can systematically spread disinformation, including through targeted social media campaigns, thereby seeking to radicalize individuals, destabilize society and control the political narrative”.
The use of disinformation as a tool of a “sharp power” strategy had already been openly declared by Russia itself (Giusti, 2023). In 2013, Russian General Valery Gerasimov stated in a speech, later reported in a Russian military journal, that “the role of non-military means to achieve political and strategic goals has grown. In many cases, such means have surpassed the effectiveness of weapons”. This intervention signaled an overhaul of Russian strategies, oriented toward exploiting the opportunities offered by the Internet. The approach culminated in the creation of education and research programs dedicated to the analysis and use of information as a weapon, coordinated by the Federal Security Service. The resulting strategy was later described by scholar Ben Nimmo through the categories of the 4Ds: dismiss, distort, distract, dismay (Singer & Brooking, 2018).
The growing awareness of information manipulation as part of attacks aimed at generating political imbalances in foreign states has led to the implementation of security-oriented initiatives aimed at external agents. The first significant measure of the European Union in this context was the creation of the East StratCom Task Force, established, not coincidentally, within the European External Action Service (EEAS). This task force is responsible for developing strategic communication and for intercepting and preventing pro-Kremlin disinformation, especially in the Eastern Partnership countries, which are particularly exposed to this threat (EEAS, 2019). The task force is composed of a group of multidisciplinary professionals, and one of its main tools is EUvsDisinfo. 2 This tool represents the most direct and pragmatic response to the guidelines of the European Union's leaders. EUvsDisinfo is the largest open-source database currently available on pro-Kremlin disinformation, containing over 15,000 cases collected from 2015 to date. The database allows the identification of recurring themes of disinformation, facilitating their recognition and providing study material to understand their modalities. Furthermore, EUvsDisinfo provides the public with media literacy tools to independently recognize disinformation, basing its strategy not only on data collection and analysis but also on information and media education. To the same end, the website includes a section called LEARN, 3 which illustrates the key mechanisms of disinformation and offers resources for acquiring further skills. Over time, the East StratCom Task Force has grown in relevance, consolidating its 2015 mandate, to the point that it is now part of the broader EEAS “Strategic Communication, Task Forces and Information Analysis” Division. This division helps the European Union deal with the various aspects of Foreign Information Manipulation and Interference (FIMI), including disinformation, not only in neighboring countries but also globally (EEAS, 2022). A key development is the extension of the task force's range of action: it no longer focuses only on the disinformation threat from Russia but has also extended its attention to that from China (EUvsDisinfo, 2023). The infodemic accompanying the COVID-19 pandemic highlighted the evolving nature of the disinformation phenomenon, characterized by blurred and undefined outlines and capable of constant adaptation. Counter-strategies must therefore be flexible and kept up to date in order to promptly address new emerging challenges.
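To make the idea of identifying recurring themes concrete, the following is a minimal sketch of how a researcher might mine a local export of disinformation cases. The file name and column layout ("keywords" in particular) are hypothetical assumptions made for illustration, not the actual schema of the EUvsDisinfo database.

```python
import csv
from collections import Counter


def recurring_themes(path: str, top_n: int = 10) -> list[tuple[str, int]]:
    """Count the most frequent keywords across all recorded cases."""
    counts: Counter[str] = Counter()
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            # Each case is assumed to carry comma-separated keywords.
            for kw in row.get("keywords", "").split(","):
                kw = kw.strip().lower()
                if kw:
                    counts[kw] += 1
    return counts.most_common(top_n)


if __name__ == "__main__":
    # "euvsdisinfo_cases.csv" is a hypothetical local export, not a real file.
    for theme, n in recurring_themes("euvsdisinfo_cases.csv"):
        print(f"{theme}: {n} cases")
```

Even such a simple frequency count mirrors the database's stated purpose: surfacing the narratives that recur most often so that they become easier to recognize.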
Another tool developed within the EEAS in 2019, in preparation for the European elections, is the RAS. This system aims to rapidly connect all EU Member States, EU institutions, digital platforms, and fact-checkers, ensuring swift and coordinated communication in response to threats of information manipulation. The RAS is an important testament to the need for a multilateral approach, based on the exchange of information and on coordinated responses between States, in order to counter the phenomenon effectively.
The inclusion of disinformation among hybrid threats implies an emergency treatment of the issue, which allows the application of extraordinary procedures that risk compromising democratic principles. An example is the extraordinary measures adopted in relation to the war in Ukraine. In this context, the European Union introduced restrictions against Sputnik and Russia Today, two of the main international Russian broadcasters, blocking their transmission on all platforms, including cable channels, satellite, the Internet, and mobile applications. However, it is essential to underline that these restrictions do not prevent these media outlets from continuing to carry out research or interview activities. Here too, the multi-actor approach proved successful, involving not only European institutions but also digital platforms. YouTube, Facebook, and TikTok blocked the social profiles associated with these Russian broadcasters; Apple and Microsoft removed the possibility of downloading their applications, while Twitter warns users about links leading to Russian sites. TikTok suspended video uploads and live streaming from Russia, and Google stopped its advertising activities in Russia (Giusti, 2023). In this context, it is significant to underline how Ukraine has used communication channels to its advantage to create alliances and consolidate its position. Ukraine's approach to strategic communication is a further sign of how communication has become an essential factor in hybrid war operations, playing a decisive role in shaping narratives and influencing public opinion at a global level.
But if disinformation, understood as a form of “hybrid threat”, can give rise to countermeasures that are extremely rigid for a democratic system, how should we deal with all the unreliable news that does not present itself as such? And when is a case of disinformation attributable to a hybrid warfare strategy, and when is it not? In a communication from the European Commission (2018c) to other European institutions in December 2018, we read: “The actors behind disinformation may be internal, within Member States, or external, including state (or government sponsored) and non-state actors. (…) disinformation by the Russian Federation poses the greatest threat to the EU. It is systematic, well-resourced, and on a different scale to other countries. In terms of coordination, levels of targeting and strategic implications, Russia's disinformation constitutes part of a wider hybrid threat that uses a number of tools, levers, and also non-state actors”.
The document states that disinformation, as part of a hybrid warfare strategy, is designed with the intent of undermining the political balance of the targeted States and is managed and financed by state actors. However, disinformation in its broadest sense, as stated in this communication and widely demonstrated during the COVID-19 emergency, can also be generated by individuals or non-state entities with other aims, for instance economic ones. In these cases, its harmful impact may not concern the political sphere but other aspects of society, such as health or environmental protection (European Court of Auditors, 2021). So when cases of information manipulation do not fall into the category of “hybrid threats”, what strategies does the European Union use? The European Union has tried to provide a defined answer to this question since 2018. Until then, disinformation had been treated exclusively as a foreign threat or had been addressed only partially, by resorting to regulations on issues that, although related, did not explicitly confront it. In this regard, we can mention the EU multistakeholder Code of Conduct on countering illegal hate speech online (May 2016) and the Communication from the European Commission on tackling illegal content online (September 2017). 4 The Code of Practice on Disinformation was then published in 2018 (European Commission, 2018b), the same year as the previously mentioned communication, the Action Plan Against Disinformation, which introduced the term “disinformation” into an official document of the European Union for the first time.
The Code of Practice on Disinformation marks a turning point for the European Union, assigning responsibility to digital intermediaries that had until then largely maintained a position of non-responsibility. The 2018 code formalizes the commitment voluntarily undertaken by its signatories (Google, Facebook, Microsoft, and others, but also advertisers and other players in the technology sector) to actively contribute to the fight against and prevention of disinformation. Specifically, the code is organized into 21 commitments divided into five main areas. The first, Scrutiny of ad placements, aims to monitor advertisements so that they do not support sites spreading false information. The second, Political advertising and issue-based advertising, aims to ensure transparency and regulation of political advertising. The third, Empowering consumers, aims to promote greater media education, encouraging informed participation by consumers. The fourth, Integrity of services, focuses on closing fake accounts and removing automated bots. Finally, the fifth, Empowering the research community, aims to create a multi-actor network to fight disinformation, involving the research community and a fact-checking network. Although the code was officially presented as a voluntary initiative of digital intermediaries, it is more accurate to describe it as a co-regulatory rather than a self-regulatory document: although not the result of a binding EU directive, it was clearly promoted by a strong institutional request. This request emerged after the events of 2016, when disinformation had a significant impact on fundamental democratic processes, such as the presidential elections in the United States, won by Donald Trump, and the Brexit referendum in the United Kingdom (Giusti, 2023; Stocchetti, 2021).
After two evaluation reports on the Code (carried out by the European Commission in 2019 and 2020), in 2021 the Commission presented guidance aimed at improving the Code itself. In particular, the guidance highlighted the need for greater accountability from all signatories, for more structured and cohesive measures overcoming the limitations of a voluntary approach, and for more precise Key Performance Indicators capable of ensuring effective monitoring (European Commission, 2021). To close the loopholes of the previous code and take a more direct approach to disinformation, the EU paved the way for the signing of the Strengthened Code of Practice on Disinformation on 16 June 2022. Among the many differences between the two codes, we can highlight the aim of improving collaboration with fact-checkers and the academic community, as well as the attention to cutting-edge technologies such as artificial intelligence (AI) for tackling emerging issues. Furthermore, the code highlights the need for more rigorous transparency and reporting. The most significant change compared to the previous code, however, is that it is supported by the DSA, the first EU regulatory instrument related to disinformation. The combination of the Code and the DSA makes it possible to establish a balance between obligations and flexible operating methods, capable of adapting to the differences between individual platforms and complying with the e-commerce directive (Hassan & Pinelli, 2022). The DSA takes an “asymmetric” approach, imposing more rigorous obligations exclusively on platforms with particularly large audiences, which carry a greater risk of social and economic influence. The DSA aims to protect users of such platforms, emphasizing the need for transparency in actions against disinformation by Very Large Online Platforms (VLOPs). Equally, it ensures supranational regulatory uniformity, addressing the problem of fragmented national initiatives (Stocchetti, 2021) that struggle to respond to global challenges. Another significant aspect concerns the possibility for competent authorities to access data, for example in the context of investigations into specific behaviors in exceptional cases. Although the DSA may initially appear limited in terms of strict obligations, allowing considerable leeway to platforms, this can be interpreted as a strength in an asymmetric relationship, since VLOPs operate on a much higher level with regard to information and technology; the arrangement creates a balance between freedom of expression and the need for regulation. In an interview with Zorloni (Sept 12, 2024), Roberto Viola, head of the European Commission's Directorate-General CONNECT, explained that “all regulations have a processing cycle that does not last only a few months. It takes at least a couple of years to activate them”. In the case of the DSA, “a fundamental element that is not yet fully in force is the audit algorithm”, that is, an external control of the platforms’ algorithms, essential to understand how data are used. Viola added: “Now we are entering this phase; the DSA involves an audit of the algorithmic recommendation systems by an independent company”.
Viola's observations clearly suggest that a radical change process takes time, and only through constant effort and monitoring is it possible to evaluate its actual effectiveness and identify its weaknesses. The DSA was nevertheless tested during the 2024 European parliamentary elections, where no systemic attack was recorded and the Commission did not have to activate any emergency procedures (Zorloni, 2024). According to the European Digital Media Observatory (EDMO), this may be due to the rapid identification of threats and greater attention to the phenomenon of disinformation. In its first year, therefore, the DSA seems to have initiated a significant change, also visible in the processes implemented by digital platforms. Although some of them had already undertaken paths of transparency and control, as in the case of the Facebook Oversight Board, 5 the DSA has made transparency reports and anti-disinformation actions more systematic; moreover, it has strengthened collaboration with fact-checking networks, implemented reporting systems for users, increased the visibility of reliable news, and fostered the creation of media literacy tools. 6
Despite this ambitious ongoing project, it is essential to underline that changes in the digital age occur at a significantly faster pace than attempts to regulate them. An example is the recent European Union law on artificial intelligence (AI Act), which provides for a two-year implementation period (European Commission, 2024). The significant progress made by the European Union aims to address existing problems, but it is right to ask whether preventive approaches are possible. To this end, we must turn our attention to the third actor in the field: the citizen.
The Role of Citizens
Looking at the initiatives seen so far, we note that the strategies implemented at the European level aim to counter a phenomenon that already exists. The European Union, like other actors in this field, intervenes ex post to define ways to combat the phenomenon. This means that the entities involved are always one step behind emerging threats (e.g., deepfakes), which are aided by the growing diffusion of platforms for AI-generated content. These new tools for manipulating information have alarmed the institutions, which responded with the AI Act. Individual citizens come to play a significant role here as “users”: disinformation is directed at individuals, and they need protection.
The importance of this “third actor” has been highlighted in EU documents from the very beginning. Specifically, it is present in the 2018 report of the High-Level Expert Group (HLEG), which is the starting point for the current discourse. Among the recommendations of the HLEG, in fact, we find the following essential passage: “The multi-dimensional approach recommended by the HLEG is based on a number of interconnected and mutually reinforcing responses. These responses rest on five pillars designed to: (…) 2. promote media and information literacy to counter disinformation and help users navigate the digital media environment; (…) Additional measures aimed at strengthening societal resilience in the longer term need to be implemented in parallel. Therefore, the HLEG recommends a set of complementary measures. These measures are designed to support the diversity and sustainability of the news media ecosystem on the one hand. On the other hand, they are designed to develop appropriate initiatives in the field of media and information literacy to foster a critical approach and a responsible behavior across all European citizens” (European Commission, Directorate-General for Communication Networks, Content and Technology, 2018).
Hence, the HLEG underlines the importance of developing control tools for users and the urgency of launching initiatives related to media and information literacy. This recommendation is repeated in various European Union documents aimed at countering information manipulation, and it is particularly supported by the EDMO. 7 In particular, the EDMO creates teaching materials and educational resources, organizes seminars and training courses open to different groups of the population, and launches awareness campaigns to increase understanding of the risks associated with disinformation. Among its missions, EDMO aims to train citizens to be aware of and capable of recognizing disinformation mechanisms, with the support of a multidisciplinary network composed of fact-checkers, media literacy experts, and academic researchers in collaboration with media organizations, online platforms, and media literacy practitioners.
Also with the aim of protecting users and actively involving them in the fight against disinformation, the wave of harmful information that emerged during the COVID-19 emergency spurred significant progress in the creation of projects dedicated to verifying the truthfulness of news. Alongside the foundation of the International Fact-Checking Network (IFCN), we witnessed the birth and consolidation of specialized media and platforms, including Pagella Politica, Full Fact, and Snopes (Casero-Ripollés et al., 2023), as well as tools such as ClaimBuster, Hoaxy, and the more recent InVID, which deals with the authenticity of videos. 8 Another open tool, although used mostly for research purposes, was CrowdTangle, an important resource for the fight against disinformation, which, however, was shut down in August 2024. 9 This platform has been replaced by the Content Library and API, now available only to researchers and non-profit organizations upon request. 10
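As a concrete illustration of how such tools are typically consumed programmatically, the sketch below queries a claim-scoring service in the style of ClaimBuster. The endpoint URL, header name, and response layout are assumptions made for illustration and may not match the service's current API; its official documentation should be consulted before any real use.

```python
import requests

# The endpoint, header name, and response layout below are assumptions made
# for illustration; check the service's official documentation before use.
API_URL = "https://idir.uta.edu/claimbuster/api/v2/score/text/"  # assumed
API_KEY = "YOUR_API_KEY"  # hypothetical placeholder issued on registration


def check_worthiness(sentence: str) -> float:
    """Return a 0-1 score estimating how 'check-worthy' a sentence is."""
    resp = requests.get(API_URL + sentence, headers={"x-api-key": API_KEY})
    resp.raise_for_status()
    # Assumed response shape: {"results": [{"score": 0.87, ...}]}
    return resp.json()["results"][0]["score"]


if __name__ == "__main__":
    claim = "The unemployment rate doubled in the last two years."
    print(f"Check-worthiness: {check_worthiness(claim):.2f}")
```

Services of this kind do not establish whether a claim is true; they only flag statements worth a human fact-checker's attention, which is precisely why the citizen's own media literacy remains decisive.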
However, the tools presented above, together with the sections dedicated to media literacy, presuppose an active and rational citizen, not only aware of disinformation but also committed to its resolution. This ideal citizen seems to be far from reality. The typical individual we are dealing with makes quick decisions, is guided mainly by emotions rather than critical thinking, and is accustomed to short and instant news. This tendency leaves the individual exposed to information manipulation, which exploits emotional levers and polarizing logic, amplified by the algorithms of social networks (Quattrociocchi & Vicini, 2023; Rubinelli et al., 2020). It is significant that, within the transparency initiatives of major digital platforms such as Meta, access to the algorithms that determine the selection of news remains limited. However, it is now well established that these algorithms exploit users’ confirmation bias, showing them content aligned with their value system and thus reinforcing it (Quattrociocchi & Vicini, 2023; Sunstein, 2017). This process stimulates a form of gratification that keeps users in their comfort zone, reducing their tendency to seek information that does not confirm their beliefs. Although we tend not to compare algorithmic selection to an editorial decision, this dynamic makes choices in place of the user, rendering the user an essentially passive subject in decisions about information.
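The dynamic just described can be made tangible with a deliberately simplified toy model. The sketch below is not any platform's actual algorithm: it merely shows how ranking content by predicted engagement, proxied here by closeness to a user's prior stance, systematically surfaces belief-confirming items.

```python
from dataclasses import dataclass

# Toy model of engagement-driven ranking; not any platform's actual algorithm.
# Items closest to the user's prior stance are predicted to engage the most
# and are therefore shown first, reinforcing that stance over time.


@dataclass
class Item:
    title: str
    stance: float  # position on a polarizing issue, from -1.0 to 1.0


def rank_feed(items: list[Item], user_stance: float) -> list[Item]:
    """Order items by predicted engagement, i.e., closeness to the user's stance."""
    return sorted(items, key=lambda it: abs(it.stance - user_stance))


if __name__ == "__main__":
    feed = [
        Item("Balanced explainer", 0.0),
        Item("Outraged take, side A", 0.9),
        Item("Outraged take, side B", -0.9),
    ]
    for item in rank_feed(feed, user_stance=0.8):  # a user leaning toward side A
        print(item.title)  # belief-aligned content surfaces first
```

Even in this caricature, the user never chooses the ordering: the ranking function does, which is exactly the sense in which algorithmic selection substitutes for an editorial decision.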
The structure of digital platform use must be placed at the center of the European debate on policies against disinformation, since the main form of prevention against this and other threats to democratic systems is undoubtedly a free and informed individual, a condition that the functioning of the platforms seems to hinder. As Nyhan and Reifler (2010) have shown, tagging fake news with fact-checking labels or, in some cases, with debunking does not automatically change an individual's perception of whether the news is true. Algorithmic logics tend to hit a target predisposed to believe in such news, and disconfirmation can cause a “backfire effect” rather than solve the problem, reaffirming one's position and even discrediting the debunking work. 11 This shows that the problem of disinformation is much more complex and systemic than a mere distinction between “foreign threat” and not, or between true and false, and must be deconstructed with its complexity in mind. Although countermeasures can more easily detect foreign disinformation campaigns, automated content, or visual manipulations, it is far harder to identify false information when it is shielded by freedom of expression and wrapped in a variety of competing “truths”. The polarizing reality of digital platforms, together with other systemic factors, has fueled a “market of truths” (Nicita, 2021) rather than a market of ideas, which no longer favors an open and reciprocal debate based on shared and shareable facts but instead promotes a conflict between “us” and “them”.
The effort to guarantee free expression and a free market seems to have obscured the importance of preserving the right to receive and seek information. The mainstream media have undergone significant transformations in recent years, shaped by the crisis triggered by the emergence of the Internet. Adapting to a demand for extreme and adversarial content, these outlets have often become amplifiers of unreliable news capable of attracting the public's attention (Stocchetti, 2021). In this context, it is crucial to observe that the threat to pluralistic information does not come from direct censorship, but rather from its exact opposite (Nicita, 2021). In this sense, freedom of expression becomes its own enemy. Freedom of thought, essential to fully exercise the right to free speech, is hampered by the difficulty of accessing reliable and impartial information, the basis of free and critical thinking.
Considering what has been said so far, it is necessary to underline that empowering individuals represents the best tool for confronting a constantly changing threat. To promote the ability to make informed everyday choices in the digital world, it is essential to reflect on media training that considers not only threats but also the dynamics that render users passive in contexts where they believe themselves to be protagonists. Only citizens can turn the tide.
Conclusion
The challenges democratic institutions face in a digitally connected world are the reasons behind the EU's continuous struggle against disinformation. Its initiatives to establish a proactive and adaptable regulatory balance, such as the DSA and the Strengthened Code of Practice on Disinformation, represent significant advancements. These regulatory initiatives signal a shift toward shared responsibility by imposing obligations on both digital platforms and governments. Digital platforms are urged to actively support user protection, public transparency, and coordinated actions to combat disinformation. However, the EU must remain agile in its efforts, given the rapidly changing digital landscape and particularly the emergence of AI-generated disinformation.
At the same time, empowering citizens remains a key component of the EU's anti-disinformation campaign. The importance of teaching people to evaluate information critically is recognized by the EDMO and other media literacy initiatives. These initiatives highlight that a truly resilient democratic society requires citizens who can critically engage with information and identify deceptive practices. In addition, by increasing openness and access to information-verification technologies, the EU aims to address the problem of disinformation not only at an institutional level but at the very basis of public perception and engagement.
Looking forward, the EU needs to continuously improve its strategy to remain flexible and responsive to new threats such as deepfakes and AI-driven disinformation. Preserving democratic ideals in the digital age requires cooperation between government institutions, digital platforms, and private citizens. While the EU's programs provide a model, their success ultimately depends on cultivating a population that values transparency, communication, and shared responsibility. The EU's commitment to a comprehensive and balanced approach will be essential to maintaining the integrity of its democratic institutions and ensuring the well-being of its citizens in a world where the nature of information itself is constantly changing.
Funding
This research was funded by the European Commission under the Project AI4Debunk, grant number 101135757.
Declaration of Conflicting Interests
The authors declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
