3. Tolosana R, Vera-Rodriguez R, Fierrez J, et al. Deepfakes and beyond: a survey of face manipulation and fake detection. Information Fusion, 2020; 64:131–148.
4. Ahmed S. Who inadvertently shares deepfakes? Analyzing the role of political interest, cognitive ability, and social network size. Telematics and Informatics, 2021; 57:101508.
5. Doffman Z. Chinese deepfake app ZAO goes viral, privacy of millions ‘at risk’. Forbes Magazine. 2019. https://www.forbes.com/sites/zakdoffman/2019/09/02/chinese-best-ever-deepfake-app-zao-sparks-huge-faceapp-like-privacy-storm/?sh=2951ebb88470 (accessed Jan. 27, 2021).
6. Garry M, Wade KA. Actually, a picture is worth less than 45 words: narratives produce more false memories than photographs do. Psychonomic Bulletin & Review, 2005; 12:359–366.
7. Segovia KY, Bailenson JN. Virtually true: children's acquisition of false memories in virtual reality. Media Psychology, 2009; 12:371–393.
8. Fox J, Bailenson JN. Virtual self-modeling: the effects of vicarious reinforcement and identification on exercise behaviors. Media Psychology, 2009; 12:1–25.
9. Ahn SJ, Bailenson J. Self-endorsed advertisements: when the self persuades the self. Journal of Marketing Theory and Practice, 2014; 22:135–136.
10. Levine TR. (2019). Duped: truth-default theory and the social science of lying and deception. Tuscaloosa, AL: University of Alabama Press.
11. Bond CF Jr, DePaulo BM. Accuracy of deception judgments. Personality and Social Psychology Review, 2006; 10:214–234.
12. Hancock JT, Woodworth MT, Goorha S. See no evil: the effect of communication medium and motivation on deception detection. Group Decision and Negotiation, 2010; 19:327–343.
13. Posner M, Nissen M, Klein R. Visual dominance: an information-processing account of its origins and significance. Psychological Review, 1976; 83:157–171.
14. Koppen C, Spence C. Seeing the light: exploring the Colavita visual dominance effect. Experimental Brain Research, 2007; 180:737–754.
15. Graber DA. Seeing is remembering: how visuals contribute to learning from television news. Journal of Communication, 1990; 40:134–155.
16. Prior M. Visual political knowledge: a different road to competence? Journal of Politics, 2013; 76:41–57.
17. Sundar S. (2008). The MAIN model: a heuristic approach to understanding technology effects on credibility. In Metzger M, Flanagin A, eds. Digital media, youth, and credibility. Cambridge, MA: MIT Press, pp. 73–100.
18. Fallis D. The epistemic threat of deepfakes. Philosophy & Technology, 2020; 1–21.
19. Vaccari C, Chadwick A. Deepfakes and disinformation: exploring the impact of synthetic political video on deception, uncertainty, and trust in news. Social Media + Society, 2020; 6:1–13.
20. Dobber T, Metoui N, Trilling D, et al. Do (microtargeted) deepfakes have real effects on political attitudes? The International Journal of Press/Politics, 2019; 26:69–91.
21. Reeves B, Nass C. (1996). The media equation: how people treat computers, television, and new media like real people. Cambridge, United Kingdom: Cambridge University Press.
22. Kim SJ, Hancock JT. How advertorials deactivate advertising schema: MTurk-based experiments to examine persuasion tactics and outcomes in health advertisements. Communication Research, 2017; 44:1019–1045.
23. Liv N, Greenbaum D. Deepfakes and memory malleability: false memories in the service of fake news. AJOB Neuroscience, 2020; 11:96–104.
24. Oh SY, Bailenson J, Krämer N, et al. Let the avatar brighten your smile: effects of enhancing facial expressions in virtual environments. PLoS One, 2016; 11:e0161794.
25. Leong JS. Investigating the use of synthetic media and real-time virtual camera filters for supporting communication and creativity. Unpublished master's thesis, Massachusetts Institute of Technology, 2021.
26. Hancock JT, Naaman M, Levy K. AI-mediated communication: definition, research agenda, and ethical considerations. Journal of Computer-Mediated Communication, 2020; 25:89–100.