Abstract
This article examines the legal, regulatory and societal challenges posed by deepfake technology, situating its analysis within a comparative framework spanning the European Union, United States and China. It explores the multifaceted harms of deepfakes – from non-consensual pornography and political disinformation to financial fraud and identity manipulation – and analyses their rapid dissemination through online ecosystems that undermine both individual dignity and democratic trust. The study assesses the EU Artificial Intelligence Act, highlighting its transparency-based approach, definitional boundaries and classification of deepfake systems as ‘limited-risk’, while identifying contexts that may warrant high-risk or prohibited status. It underscores the limitations of transparency obligations in addressing malicious actors, cross-border disinformation and intimate image abuse, and examines the complementary roles of the Digital Services Act, the General Data Protection Regulation and the EU Directive on combating violence against women in regulating different stages of the deepfake lifecycle. Ultimately, the article argues for a multi-layered, adaptive governance model that reconciles the protection of rights, dignity and democratic integrity with the preservation of legitimate innovation in AI-driven creativity and communication.
