Abstract
Very large online platforms shape public discourse, raising concerns about the impact of misinformation and disinformation on democratic stability. The European Union responded with the Digital Services Act (DSA) framework. However, its enforcement remains limited, partly due to conceptual ambiguities surrounding misinformation and disinformation. This article investigates how definitional inconsistencies undermine the DSA's implementation. We analyzed 79 documents, including EU legislation, platform policies, fact-checking codes, and academic publications, to examine how key actors define these terms. We identified four key definitional criteria: content quality, intent, associated risk, and creation and dissemination techniques. Our findings reveal that these criteria reflect divergent institutional interests and result in fragmented definitions. This fragmentation generates critical risks, affecting freedom of expression, research, countermeasure design, intervention effectiveness, and impact assessment. We conclude by offering recommendations to support criteria-based definitions, improve risk evaluation, and reinforce the DSA's effectiveness in countering information disorders.
