Abstract
This paper interrogates the challenges that deepfake media and AI-generated synthetic content pose for educational policy, governance, and the evolving notion of classroom safety. Moving beyond traditional paradigms that center student welfare and physical security, the analysis foregrounds the neglected psychosocial risks facing teachers, whose occupational safety is increasingly threatened by gendered forms of digitally mediated violence. Drawing on Feenberg’s Critical Theory of Technology and Manne’s analysis of misogyny, I situate deepfake harms within broader structural inequalities and neoliberal logics that commodify both teacher and student identities. Using Rahm’s concept of “educational imaginaries,” the paper presents two vignettes. The first shows how teachers face new OHS risks through real-time, technology-enabled abuse. The second shows how marketing uses of student images create additional vectors for consent violation and institutional risk. The analysis critiques current governance frameworks and calls for intersectional, anticipatory policies that address both relational and systemic dimensions of AI-facilitated harm. Ultimately, the paper calls for a reimagining of classroom safety that embeds gender justice, psychosocial well-being, and collective accountability at the center of AI governance in education, for education.
Introduction
A crucial commitment of the United Nations Sustainable Development Goals (SDGs) is to leave no one behind, and in that spirit, let us ensure that we offer truly safe and inclusive educational environments to all learners, so that they may access equal opportunities and achieve great success. – Eric Falt, UNESCO New Delhi Director (Falt, n.d.)
As deepfakes become pervasive in education, policy must examine synthetic media and its harms through the lens of safety. Recent advances in the accessibility, speed, and simplicity of generating synthetic images and video have fundamentally altered perceptions of classroom safety, with deepfakes presenting new and significant risks to educational environments. Garran and Rasmussen (2014, p. 13) argue that “classroom safety is not a singular concept, but rather a differentially experienced phenomenon that is related to questions embedded in power and privilege.” As such, when we refer to classroom safety, we are not solely addressing protection from physical violence but also safeguarding against psychological and emotional vulnerability.
Vulnerability, as a result of generated images and video, includes exposure to “deepfakes,” which Kikerpill et al. (2021, p. 20) describe as “the replacement of one person’s image in existing – often pornographic – media content with the likeness of another.” Deepfake technology is a specific application of Generative AI (GenAI) that uses deep learning algorithms, typically based on neural networks, to create highly convincing fake media content, such as images, videos, or audio recordings, often referred to as synthetic media (Baidoo-Anu and Ansah, 2023). Deepfakes are a specific subset of synthetic media, a broader term that encompasses various types of artificially generated content (Whittaker et al., 2020). While there is general consensus on the importance of ensuring classroom safety, there is a notable lack of understanding regarding how staff and students conceive of “classroom safety” in relation to deepfakes.
Much of the discourse is situated within marketing, communications, and policing measures beyond educational systems, yet reporting of cyberbullying in schools involving deepfakes is increasingly evident (Australian eSafety Commissioner, 2022). This discussion paper offers a novel means to interrogate deepfakes in terms of safe classrooms, through a critical feminist lens.
I build on the conversation offered by Kardos (2025), who argues that “AI-generated pornography is often problematized on the basis of its non-consensual nature” (p. 2) within UN discourse. Noting that the main harm is framed as “women are portrayed in non-consensual scenarios” (p. 9), Kardos (2025) argues that this narrow focus “fails to connect AI-generated pornography to the issues of pornography in the first place, which gives a depoliticized and dehistoricized discourse about the problems of AI-generated pornography” (p. 1). Kardos further observes that, despite the proliferation of AI-generated pornography, educational documents “more or less, overlook the issue of AI-generated pornography” (p. 7), and contends that this “keeps this issue isolated and renders it as a ‘women’s issue’, removed from the general discourse” (p. 8). In sum, Kardos (2025) calls for education policy and discourse to move beyond the consent paradigm, insisting that “the harms of AI-generated porn must be connected to the harms of pornography in the first place” (p. 14). This paper responds to this call by situating deepfake pornography of teachers within the context of what is considered to be a safe classroom.
I begin by acknowledging the limited understanding of teachers’ specific intentions when prioritizing safety within the physical classroom (Barrett, 2010; Garran and Rasmussen, 2014). When considering the less tangible risks to safety posed by deepfakes, the discussion often centers on physical and mental safety and, more recently, data safety. However, as Pangrazio and Selwyn (2023) observe, “approaches [to data safety] focus primarily on personal data that people have voluntarily uploaded to devices and platforms, rather than examining the broader ecologies of data and the data broker economy.” Moreover, the notion of “data safety” typically stems from approaches to cyber safety and has largely been directed toward protecting children and young people. Yet, as the classroom is the teachers’ workplace, having autonomy and control over deepfakes becomes an integral aspect of a broader economy of risk management, posing intangible risks to teachers’ workplaces (Arantes, 2022). Finally, as raised by Kardos (2025), deepfake pornography has been conceptualized as a “women’s issue,” and across OECD countries women comprise an average of 70% of teachers across all levels of education combined: on average, women represent 97% of teaching staff at pre-primary level, 83% at primary level, 60% at upper secondary, and 44% at tertiary level (OECD, 2019). Arguably, then, deepfake pornography in the classroom is in fact a women’s issue. But it is also a workplace issue of gendered violence perpetrated against staff while under the duty of care of their employer. Therefore, fostering safe classrooms necessitates not only taking into account the notions presented by Garran and Rasmussen (2014) around physical harm and cultural competence but also acknowledging the rights established through online content moderation and gendered crime prevention, as discussed by Kikerpill et al. (2021).
What follows is a critical interrogation of the intersection of deepfake media, gendered harm, and AI governance in education. Drawing on critical feminist theory, the paper begins by positioning deepfake technology not just as a technical or cyber-safety issue but as a profound threat to the psychological, reputational, and professional safety of teachers and students, especially women and those with marginalized identities. It then synthesizes Feenberg’s Critical Theory of Technology with Manne’s analysis of misogyny to situate the harms of synthetic media within the broader context of power, privilege, and workplace safety in education. A genealogical overview of AI and synthetic media follows, leading into the development of two educational imaginaries that concretize the distinct risks posed to teachers and students, respectively. The paper concludes by engaging with the Australian eSafety Commissioner’s Safety by Design principles, advocating for the adoption of inclusive, intersectional, and anticipatory governance approaches.
Background: Deepfake technology as more than a technical or cyber-safety issue
Kikerpill et al. (2021) analyze Reddit’s ban on deepfake pornographic content, suggesting that situational crime prevention techniques should inform more effective online content moderation. They further argue that all crimes in technology-mediated environments are fundamentally acts of communication, emphasizing that prevention and policy must address this communicative core of cybercrime. In the context of education, this insight reveals that traditional concepts of guardianship are largely irrelevant. This leaves individual teachers reliant on their own knowledge and digital literacy to stay safe, an approach that is ultimately insufficient. Shih and Wang (2021) support this view by concluding that integrating gender issues into general education is essential for creating safe, equitable, and harmonious educational contexts. Further, Estellés et al. (2023) critically examine how UNESCO and New Zealand’s Ministry of Education construct “safety” in education policy as a neoliberal discourse, showing that it individualizes responsibility for student well-being, primarily onto teachers and students, while obscuring broader systemic causes of risk and managing student behavior and social risk under the guise of altruism. Therefore, as GenAI intensifies the gendered risks of pornographic deepfakes, responsibility cannot rest solely with teachers (Kikerpill et al., 2021; Kardos, 2025; Estellés et al., 2023). Rather, the systemic inclusion of gender issues within education is required when deepfakes are so readily available (Shih and Wang, 2021). Only then can we address these emerging harms and approach the “safe classroom” through topics such as the deceptive manipulation of digital content.
The potential for deceptive manipulation of digital content has been discussed in terms of misinformation and identity theft (Westerlund, 2019). Its impacts on politics and advertising (Andrejevic et al., 2021) are well established, with one widely known example of a deepfake featuring former U.S. President Barack Obama (Hancock and Bailenson, 2021). In this instance, AI and deep learning algorithms were employed to synthesize Obama’s facial expressions and lip movements, seamlessly integrating them onto the body of an actor delivering a speech. The result was a convincing yet entirely fabricated video that appeared to show President Obama saying and doing things he never actually did. Fast-forward to the present, and the Australian eSafety Commissioner (2022) is releasing warnings for teachers about real-time deepfake pornography.
Deepfakes utilize artificial intelligence (AI) and autonomous systems to substitute or overlay existing elements in media files with deceptive counterparts. According to Floridi (2023, p. 1), AI is defined as “an engineered system that can, for a given set of human-defined objectives, generate outputs – such as content, predictions, recommendations, or decisions – learn from historical data, improve its own behaviour, and influence people and environments.” The term “deep” in deepfake is derived from “deep learning,” a subset of machine learning utilizing neural networks with multiple layers for intricate pattern analysis. These algorithmic processes allow for the creation of highly realistic and sophisticated synthetic media, such as that observed in the Obama deepfake. I argue that the emergence of deepfakes has triggered a paradigm shift in our understanding of classroom safety for teachers. In the context of education, the impacts of deepfakes on teachers (Arantes, 2022) remain in the shadows, overrun by policy change associated with academic integrity issues (Cotton et al., 2023). However, the compromise to a safe teaching and learning environment caused by the deceptive manipulation of student and staff images, videos, or audio recordings using computer-generated means is resulting in harm that needs greater debate and discourse in policy circles. According to Rahm (2023, p. 46), the “visions, policies, and projects of educating citizens” can be considered through educational imaginaries as a means to “problematize, negotiate and ultimately govern citizens and citizenship at the intersection between technology and education.” What follows is a brief genealogy of deepfakes, leading into two educational imaginaries that problematize the notion of a “safe classroom” and illuminate the various ways deepfakes are implicated in harm.
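To make the “multiple layers” concrete for readers outside computer science, the sketch below shows, in skeletal form, the shared-encoder, dual-decoder autoencoder arrangement that early face-swap tools popularized. It is a minimal illustrative sketch only, assuming PyTorch; the layer sizes, class names, and variable names are assumptions chosen for brevity, and the code is not drawn from, nor representative of, any actual deepfake application.

```python
# Minimal sketch (illustrative only) of the shared-encoder, dual-decoder
# autoencoder design associated with early face-swap tools. Assumes PyTorch;
# all sizes and names are arbitrary choices for this example.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Stacked convolutional layers compress a 64x64 face crop to a latent code."""
    def __init__(self, latent_dim: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),    # 64 -> 32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),   # 32 -> 16
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),  # 16 -> 8
            nn.Flatten(),
            nn.Linear(128 * 8 * 8, latent_dim),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

class Decoder(nn.Module):
    """Mirror-image layers reconstruct a face from the latent code;
    one decoder is trained per identity."""
    def __init__(self, latent_dim: int = 256):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 128 * 8 * 8)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),   # 8 -> 16
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),    # 16 -> 32
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),  # 32 -> 64
        )

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        return self.net(self.fc(z).view(-1, 128, 8, 8))

# One shared encoder with two identity-specific decoders: training decoder_a
# on person A and decoder_b on person B, then passing A's latent code through
# decoder_b, is the arrangement that re-renders A's pose with B's likeness.
encoder, decoder_a, decoder_b = Encoder(), Decoder(), Decoder()
face_a = torch.rand(1, 3, 64, 64)     # stand-in for a cropped, aligned face
swapped = decoder_b(encoder(face_a))  # untrained here, so output is noise
print(swapped.shape)                  # torch.Size([1, 3, 64, 64])
```

The policy-relevant point is how little bespoke engineering this arrangement requires: every layer is an off-the-shelf component, which is precisely why the accessibility, speed, and simplicity noted earlier have escalated so quickly.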
This is significant, as post-ChatGPT policy responses have concentrated on academic integrity and assessment redesign. This paper argues that this focus leaves a parallel risk, that of synthetic images, audio, and video, underexamined in schooling contexts. GenAI is not only text. By focusing only on academic integrity and assessment reform, we arguably leave unaddressed the rapid fabrication of sexualized and defamatory media that directly affects teachers’ conditions of work. In education, where women comprise the clear majority of the teaching workforce across levels, these harms implicate occupational health and safety, anti-discrimination duties, and the employer’s responsibility to provide a safe workplace. By contrast, the growing literature on cheating and assessment reform does little to address image governance, psychosocial protection, or incident response for synthetic media. Deepfakes therefore represent not only a threat to students but also a profound reconfiguration of teachers’ workplace rights, demanding governance that treats synthetic-media abuse as a workplace risk as much as a student-safety issue.
Genealogies of deception: Tracing the evolution of AI, synthetic media, and deepfakes in education
The history of AI arguably begins with concepts related to artificial beings with human-like qualities that can be traced back to ancient mythology and folklore, with tales of automata and mechanical beings fascinating cultures around the world (Franchi and Güzeldere, 2005). The groundwork for AI as a discipline was laid with the advent of electronic computers in the 1930s and 1940s, when mathematicians such as Alan Turing explored the theoretical basis of computation (Franchi and Güzeldere, 2005), and the field was formalized in the mid-20th century with the coining of the term at the Dartmouth Conference in 1956. Early AI programs produced unmet expectations and ran into technological limitations, resulting in an AI winter during the 1970s and 1980s (Buchanan, 2005). A resurgence of interest in the 2000s, fueled by advances in computing power and machine learning algorithms, informs the current landscape, where AI operates as a mainstream entity in various domains (Perrotta and Selwyn, 2020), from speech recognition to autonomous systems. Ongoing research addresses ethical considerations and the explainability of AI models (Birhane and Van Dijk, 2020), and the term synthetic media has emerged to broadly encompass computer-generated content, including images, videos, audio, and text, produced through sophisticated algorithms (Whittaker et al., 2020). Deepfakes are a specific manifestation of synthetic media designed to be deceptively realistic (Westerlund, 2019), leading critical edtech scholars to raise concerns regarding their potential misuse for malicious purposes (Hancock and Bailenson, 2021), including deepfake porn (Kikerpill et al., 2021). This brings us to today, where instances of harm are increasingly prevalent.
Instances of harm are evident in Australia, where deepfake pornography has been created and circulated using images of school students and teachers sourced from publicly available marketing materials (Lavelle, 2023), and in the US, where a 14-year-old was among 30 female students whose photographs were digitally manipulated and shared among peers (Nover, 2023). Research further demonstrates that when deepfakes are weaponized, women are disproportionately targeted (Compton and Hamlyn, 2023). Given that women make up between 60% and 97% of teaching staff across OECD countries, I draw on Kardos’ argument that framing deepfake pornography solely as a “women’s issue,” while accurate, serves to isolate the problem and remove it from broader policy discourse (Kardos, 2025, p. 8). By situating deepfake pornography targeting teachers within the context of classroom safety, this paper identifies a critical gap in current guidelines. The discussion that follows examines the impact of deepfakes on both students and teachers through the theoretical perspectives of Feenberg’s (1991) Critical Theory of Technology and Manne’s (2017) Logic of Misogyny.
Reimagining safety through theory: Deepfakes, policy, and the politics of safety in the classroom
The policy push for teachers to embrace digital technologies has been a global imperative for decades (Rahm, 2023; Williamson, 2019), and digital competencies are now considered essential skills for students to possess and for teachers to enable (MacKenzie et al., 2022). With the rise of GenAI, the emerging digital landscape offers immense benefits, but it is also considered to pose immense risks (Ekstedt et al., 2023). From threats to democracy (Thompson, 2021) and online harassment (Thompson, 2022) to filter bubbles (Pariser, 2011), fake news, and misinformation, GenAI is increasingly being shown to be directly associated with harm (Tlili et al., 2022). The mandated use of digital technologies, along with the encouragement for students to use GenAI, brings about the introduction of novel forms of harm.
With GenAI now ubiquitously governing educational systems through normalized technological materiality (Feenberg, 2019), the intersection of GenAI in the form of deepfakes and associated harm involves both discursive and material dimensions (Wetherell, 2013). That is, the non-neutral nature of GenAI perpetuates biases based on dominant ideologies, allowing sexist and other forms of indirect discrimination to become woven into the mundane activities that inform educational systems (Arantes and Vicars, 2023). With the concealed functionality of GenAI enacting material asymmetries (Arantes, 2022), the risks to safety are often obscured. As contemporary policy pushes for continuous 21st-century skill development to navigate regulatory challenges associated with the harms of GenAI, it is the classroom that is currently the primary avenue for addressing these potential threats (Qadir, 2022). The continuous battle over digitalization objectives is now increasingly seen through a governance perspective, specifically emphasizing the need to establish safe classrooms in the presence of AI. To interrogate the implications of deepfakes in education, I now problematize them by drawing on Feenberg (1991) and Manne (2017) to reconsider our understanding of classroom safety. Feenberg’s Critical Theory of Technology (1991) provides a framework for understanding and evaluating the relationship between technology and society.
Feenberg’s theory is grounded in the broader tradition of critical theory, which aims to examine and critique social structures and practices to uncover hidden power dynamics and promote social transformation (Feenberg, 1991). His Critical Theory of Technology distinguishes between substantive and formal rationality, leading us to question whether deepfakes promote or demote democratic, ethical, and humanistic values (Feenberg, 1991). Feenberg further argues that technologies are not neutral tools but are shaped by social, political, and cultural forces, and he introduces the concept of technical codes to challenge the implicit rules and assumptions embedded in technological designs (Feenberg, 2003). Interrogating the codes that govern the use of deepfakes, and determining who benefits from them, is crucial for promoting a more democratic and inclusive technological landscape in education. These concerns resonate with recent philosophical engagements on AI in education that broaden the ethical and epistemic stakes (Jackson et al., 2025; Peters et al., 2023).
Building on Feenberg’s focus on the social shaping of technology, these engagements have further deepened the ethical and epistemic stakes. Jackson et al. (2025) explore how artificial intelligence reshapes notions of voice and authority in educational contexts, questioning whose perspectives are amplified, whose are silenced, and how epistemic credibility is redistributed when AI-generated outputs circulate as if they were human. Peters et al. (2023), meanwhile, situate GenAI within broader debates about humanity’s future, autonomy, and democratic participation, highlighting the existential and ontological dimensions of AI-mediated education. Together, these contributions expand the philosophical foundation of this paper by demonstrating that the harms of deepfakes are not only technical or regulatory but also epistemic and ontological. In doing so, they prompt us to ask who, why, how, and what is shaping how education defines knowledge, authority, and safety when GenAI is present.
In Manne’s 2017 book, “Down Girl: The Logic of Misogyny,” we see a philosophical analysis of misogyny. Misogyny is not presented as a pathological hatred of women harbored by individual men; rather, Manne leads us to understand it as a social and moral phenomenon that polices women through gendered norms and expectations (Manne, 2017). This resonates with Jackson et al. (2025, p. 654), who observe that “The manliness of ChatGPT need not be confined to its mansplaining as a service (Harrison Dupré, 2023). For a start, it is a mere technological infant, not a man.” Their point underscores how gendered assumptions and discursive framings are projected onto AI systems, reinforcing the cultural scripts that Manne identifies as mechanisms of policing women’s behavior. Misogyny is discussed as the mechanism that polices women’s behavior within a patriarchal society, and Manne helps us to understand how societal structures and norms contribute to the oppression of women. As Manne (2017, p. 173) states, a woman is “always somebody’s someone, and seldom her own person. But this is not because she’s not held to be a person at all, but rather because her personhood is held to be owed to others, in the form of service labour, love, and loyalty.” The woman’s face was from a school brochure. “It” was on another woman’s body, cut and pasted with the simplicity of completing a puzzle. This points towards the ways certain linguistic constructs and narratives contribute to victim-blaming and the dismissal of women’s experiences, where the image is not “real.” Manne elucidates the simplicity of policing bodies through such acts while acknowledging the intersections of oppression, such as racism and classism. Through Manne and Feenberg’s work, we question and challenge the notion of classroom safety when digitalization is a foundational policy push for innovation in educational systems.
What follows are two educational imaginaries that draw together these theoretical insights and policy tensions to reimagine what constitutes a “safe classroom” in the era of deepfakes. The first imaginary centers on teachers, whose professional and personal safety is increasingly threatened by gendered deepfake harm in their workplace; the second turns to students and the risks to their privacy, consent, and well-being as their images circulate in digital school narratives. By presenting these imaginaries, I aim to illuminate how critical feminist and sociotechnical theory can help us see beyond conventional approaches to classroom safety, foregrounding the need for new governance strategies that address both the technological and gendered dimensions of harm in contemporary education.
Method: Educational imaginaries
In this paper, I employ Rahm’s (2023) concept of “educational imaginaries” as a qualitative methodological tool to interrogate the governance of deepfakes in education through a critical feminist-technology lens (Feenberg, 1991; Manne, 2017). Educational imaginaries are understood as the shared visions, narratives, and assumptions that shape how education is collectively imagined and governed (Williamson, 2023). In this instance, I focus on the response to technological change. Drawing on data derived from scholarly and policy literature, I constructed two vignettes centered on the collective actors of teachers and students, using these narratives to problematize conventional understandings of “classroom safety” in the context of deepfake risks. This approach aims to provoke critical reflection and provide policymakers and educational leaders with a fresh perspective on the emergent harms and governance challenges now present in educational environments.
Data sources and vignette construction
The two imaginaries were developed from a synthesis of peer-reviewed research, policy documents, regulatory reports, and documented incidents reported in media. Inclusion criteria emphasized education-specific contexts, recent cases (2022–2024), and sources that provided descriptions of image governance, psychosocial harm, or regulatory response. The process involved iterative reading and re-reading of these materials, then tracing recurring patterns of risk and governance that informed the construction of the two imaginaries: the first centered on teachers’ workplace rights and the second on student image governance. Educational imaginaries are not intended to serve as empirical case studies; rather, they function as theoretically informed, plausible scenarios that expose how narratives of “safety” are mobilized to govern practice. Their purpose is to surface risks that policy discourse often obscures (Rahm, 2023) and to highlight emerging tensions in governance. By bringing together insights from peer-reviewed literature, documented cases, policy reports, and regulatory guidance, the imaginaries offer structured accounts that reveal the limits of current governance frameworks. The intent is not to evidence past events but to provoke anticipatory reflection and consideration of alternative futures.
Results
The following results are presented through two educational imaginaries, each illustrating the risks and policy gaps surrounding deepfakes in the classroom for both teachers and students.
Vignette 1. The classroom is the teachers’ workplace: Deepfakes and OHS
The classroom is the teacher’s workplace, and occupational health and safety legislation applies—even if this is often eclipsed by the priority given to student safety. Yet, legal and ethical duties to protect teachers cannot be ignored, especially as the threat landscape changes with digital technology. Imagine a teacher delivering a lesson, only to later discover that real-time pornographic deepfakes of her were generated and circulated via smart devices during the class, a crime enabled by tools now present in many classrooms. Or consider a manipulated video depicting her making disparaging remarks about students, fabricating evidence to damage her reputation and professional standing (Arantes, 2023). These are not minor incidents or workplace misunderstandings: they are serious cybercrimes that violate personal dignity, bodily autonomy, and the right to a safe workplace.
Evidence shows deepfakes are overwhelmingly gendered in their impact, with most victims being women (Ajder et al., 2019; Paris and Donovan, 2019). Educational systems must therefore ensure that policy explicitly protects the rights of female-identifying teachers to a workplace free from such abuse, treating it within mandates against gendered violence. Manne (2017) describes misogyny as the enforcement arm of patriarchy, and to ignore the threat of pornographic deepfakes against teachers is to perpetuate and internalize these misogynistic structures. Digital technologies have only intensified this dynamic, enabling new forms of discrimination, intimidation, and profound psychological harm, as evidenced in a broad body of research (Burdon and Harpur, 2014; Datta et al., 2018; Köchling and Wehner, 2020; Taylor, 2017). As international jurisdictions move to regulate psychosocial risks associated with deepfakes (Albrecht, 2016; Anwar and Syafiq, 2023; Reid et al., 2023), schools must expand the concept of a “safe classroom” to foreground teachers’ workplace rights. This means critically interrogating the patriarchal structures in female-dominated workplaces and the surveillance technologies that enable gendered violence, and ensuring new governance models mitigate risks for those using educational technologies or whose images appear in school materials. This is not merely a “women’s issue”; it is an issue of gendered violence in the workplace. A truly safe classroom, then, is rooted in a critical feminist approach: resisting surveillance and control, centering teacher safety, and advancing policies that rigorously address digital abuse. Ongoing policy development, clear consent protocols, and robust workplace protections are now essential to respond to the evolving harms of digital manipulation and cybercrime in the classroom.
Vignette 2. From promotion to peril: Monetizing student images without adequate safeguards
Schools use imagery to craft a collective identity, showcasing vibrant campus life and shared achievement to invite families into a sense of belonging. Brochures, websites, and social media project curated scenes of connection and success, positioning students as central to the school’s story—ultimately to attract enrollments. But as these images shape the school’s public narrative, the ethical risks grow sharper. The dangers are no longer hypothetical: “We’re seeing synthetic child sexual abuse material being reported through our hotlines, and that’s particularly concerning to our colleagues in law enforcement, because they spend a lot of time doing victim identification so that they can actually save children who are being abused,” warns Australia’s eSafety Commissioner, Julie Inman Grant (Swan, 2023). With tech giants like Apple, Google, and Meta facing new legal requirements to tackle deepfake child abuse content, schools must also critically reflect on their own practices. The design of digital environments, and the choices schools make in pedagogy and marketing, will be key in protecting children as new safety standards emerge, both in Australia and beyond.
Feenberg (1991) reminds us that economic and technological developments are always bound up with questions of power and control. This is a critical lens as schools confront the looming threat of deepfakes. The Responsible Metaverse Alliance cautions, “There is currently no assurance for users that Metaverse and AI products are going to be built with the necessary guardrails, controls, or restrictions that can ensure user safety” (Policing the Metaverse, June 2023). Until institutions can truly manage the risks around image use and misuse, school boards and steering committees must pause and ask whether they have genuinely secured “informed consent” for student and staff imagery and whether it is time to fundamentally rethink what a “safe” classroom really means.
What does safety in the classroom now require? Garran and Rasmussen (2014) highlight the foundations: consent, privacy, and freedom from judgment, all of which are under new threat. Creating a genuinely safe classroom may mean banning technologies that capture real student images, using only AI-generated stock photos in school marketing, or making it explicit when visuals are not of actual students. It may also demand tougher regulations for edtech companies and full transparency about student protection in product design. Most importantly, safeguarding students requires policies that rigorously address consent for the use of images, especially when they are used for marketing or profit. As Kikerpill et al. (2021) argue, every crime in technology-mediated environments is, at its core, a communicative act. Schools must confront this reality by addressing the risks of pornographic deepfakes as a matter of policy and prevention, recognizing that the burden of responsibility cannot fall solely on teachers, students, or others when it is schools’ marketing departments that make decisions about the publication and use of student images.
Discussion and concluding remarks: Reconsidering safe classrooms alongside deepfakes
The proliferation of affordable, real-time deepfake technologies has profoundly unsettled the notion of classroom safety, demanding a fundamental rethink of governance in educational settings. As Kardos (2025) critiques, educational policy discourse overwhelmingly frames AI-generated pornography through the narrow paradigm of non-consent (to engage in the pornography), failing to address the broader structural and historical harms rooted in technology-facilitated gender-based violence. This depoliticized approach isolates deepfake harms as “women’s issues,” obscuring their entanglement with institutional power, profit-driven communications, and patriarchal norms. Yet, as Vignette 1 in this paper illustrates, the classroom is not simply a student-centered space but a workplace where teachers, predominantly women, face unique psychosocial and gendered risks from digitally mediated abuse. Vignette 2 exposes how the monetization and dissemination of student images for branding and community-building similarly institutionalizes new forms of risk and consent violation, enabled by opaque communications agendas and digital infrastructures. A genuinely holistic approach to safety requires shifting policy beyond technical or compliance-based fixes. It requires us to foreground the occupational health and psychosocial safety of teachers alongside the rights and protections of students. Current policy frameworks, shaped by technocratic and neoliberal logics, remain fundamentally inadequate.
This is a significant finding, as the consent paradigm, as shown by Kardos (2025), is insufficient in the face of pervasive, technology-enabled image abuse. That is, neither teachers nor students can meaningfully “consent” to the myriad uses, manipulations, and exposures now possible. Further, schools, through the normalization of image-sharing for marketing and administrative purposes, become unwitting facilitators of these risks. Evidence of this risk can be seen in Australian primary schools, where studies of privacy and app use reveal limited oversight and inconsistent consent protocols (Rennie et al., 2019), and in broader institutional practices, where the enforceability of social media consent policies is highly questionable (Hanlon and Jones, 2023). In practice, this means that schools may rely on consent forms that parents do not fully understand, or that do not cover the downstream uses of images, leaving students and teachers exposed to potential misuse in digital environments. Thus, policy must move beyond individual responsibility and begin to interrogate and reform the structural, economic, and gendered dimensions of safety, power, and image governance in education. As such, the two vignettes foreground three imperatives: first, that psychosocial safety for teachers and students must be equally and intersectionally embedded in governance frameworks; second, that policy must directly address, rather than depoliticize, the harms of AI-generated sexual abuse as part of broader histories of violence and inequality; and third, that enforceable, anticipatory, and feminist approaches to consent, image use, and digital safety are needed to counter the operational autonomy of nefarious actors.
Good governance of AI for education now demands a shift from student-centered protection to a holistic approach that recognizes the classroom as the teacher’s workplace and a site of psychosocial risk when smart devices are present. While digital policy and educational discourse have long prioritized student welfare, the rise of AI, synthetic media, and image-based abuse now exposes teachers, predominantly women, to unprecedented forms of gendered cybercrime and reputational harm. Simultaneously, students are subjected to institutionally sanctioned risks, driven by a communications-for-profit agenda that fundamentally misconstrues the meaning of safety in educational environments. Arguably, safety and vulnerability in education are increasingly subordinated to profitability and patriarchy, as imagery and communication become entangled with the complex realities of pornography, issues that cannot be reduced to “women’s issues.” Rather, these are matters of consent for marketing departments and occupational health and safety for workplaces. Addressing these harms requires not only technical safeguards but also a fundamental rethinking of how authority, consent, and workplace rights are negotiated in digitally mediated classrooms.
The notion of a safe classroom when deepfakes are apparent is complex and multifaceted. However, it is crucial to emphasize that the safety of not only the students but also the teachers must be spotlighted. There is an imperative to establish a learning environment that prioritizes the psychosocial safety of everyone involved. The classroom is the teachers’ workplace. In the context of classroom safety, the distinctive aspect of this discourse is that the notion of “safety” lies not only in the authority over educational resources and control of learning conditions but also in how we train, respond, and engage with instances when those learning conditions are nefariously manipulated beyond our control. Because current discourse that problematizes technology in education is “located in an omnipresent system of power asymmetries, it will also serve the interests of those in power” (Rahm, 2023), and because Manne (2017) contends that in a patriarchal society women’s actions are governed by misogyny, policy discourse is at risk of becoming a self-fulfilling prophecy.
Rather than challenging the existing power structures whereby platforms are held accountable for indirect discrimination, much of the discourse to date has inadvertently worked in favor of those who hold power, as critique has been co-opted by those in authoritative positions (Perrotta and Selwyn, 2020). Interest now lies in considering how images are used beyond administrative purposes and marketing collateral. Our policy landscape must consider the impacts and implications of these emergent technical capabilities that are now becoming mainstream, to restructure the principles governing how, when, and by whom images and video can be used in education. As part of the strategies employed by educational leaders to improve institutional outcomes through digitalization, we must also give rise to an argument that shifts emphasis from the technical subject-object hierarchy towards a focus on safety. As this paper has shown, this repositioning is essential because external and nefarious actors may replace educational leaders in the control of staff and student imagery.
When a deepfake is created, it represents a novel form of “impersonal domination” (Feenberg, 1991) that lies at odds with established educational governance and legal frameworks, which often center neoliberal discourse. We may question whether the techniques employed by external actors using deepfake technology in individual classrooms extend to the broader education sector and, as such, to its governance structures. This consideration arises particularly when institutions encourage or mandate the sharing of imagery featuring teachers and students. Does educational governance stop at the physical school door, or does the impersonal domination of teachers and students by deepfake creators now sit within the institution’s remit?
By considering the educational environment as a space where the subject is subjected to technical rule, the concept of the “safe classroom” is transferred beyond educational settings and related to misogynistic policing in a patriarchal society. This points towards the entire development of modern educational institutions being marked by unqualified control over images and video, and an environment where “himpathy” (Manne, 2017) rules over women’s rights and safety. This control, discussed by Feenberg (1991) as “operational autonomy,” grants those who nefariously use deepfakes the freedom to make independent decisions about images, irrespective of teachers’ or students’ views or the school community’s interests. Operational autonomy safeguards them from the consequences of their actions and allows them to perpetuate the conditions of their authority over the student and teacher with each iteration of the deepfake they share.
The risk of images and videos generated in educational settings being used without consent for nefarious purposes, deliberately or inadvertently, emphasizes the importance of implementing strong policies and guidelines that prioritize safety. Transparent communication about the potential safety risks within the school community is crucial to address and mitigate this concern. Moreover, there is a pressing need for educational institutions, in their utilization of images of students and staff, to acknowledge the growing concern regarding the potential misuse of personal information. As Rahm (2023, p. 67) states, “An efficient collective actor is one who can adapt to the system society – a holistic system made up of many components, shaping a totality, which is more than the sum of its parts. The system society preceded computerization, but is strongly compatible with it – in effect, they have a reciprocal relationship.” Within this context, students and teachers are recognized as collective actors likely to fall victim to deepfake technologies, given the institutional choice to share their personal information in the form of pictures, images, and videos, and given the gendered nature of teaching. While current policies imply that consent is secured through signed consent forms (Hanlon and Jones, 2023; Rennie et al., 2019), the actual risk of harm and the potential breaches of consent arising from the manipulation of videos and images through deepfake technology remain insufficiently addressed in governance structures that use imagery in marketing collateral and in pedagogies influenced by educational technology. This discussion therefore highlights the crucial role of schools as unintentional contributors to the dissemination of online images and videos, inadvertently aiding malicious actors in generating deepfakes, and emphasizes the need to examine and address the risks to teachers and students that arise when schools engage in the neoliberal push to “sell their product.”
This discussion has foregrounded the need to reconsider the policy parameters of “safe classrooms” in the era of deepfake media, drawing attention to how both teachers and students are rendered newly vulnerable through technologically mediated violence and institutional practices. Through Vignette 1, this paper demonstrated that treating the classroom solely as a student-centered space obscures teachers’ rights to occupational health and safety, especially when deepfakes constitute a form of gendered workplace violence that disproportionately targets women. Vignette 2 revealed how the monetization and dissemination of student images for community-building or branding exposes students to institutionally endorsed risks. The communications-for-profit logic now arguably erodes the concept of informed consent. Together, these vignettes underscore three key findings: first, that psychosocial safety for teachers must be equally prioritized alongside student safety in all governance frameworks; second, that contemporary policy, rooted in technocratic and neoliberal rationalities, is fundamentally inadequate for addressing the harms introduced by AI-generated deepfakes; and third, that in the absence of intersectional, anticipatory, and enforceable policy responses, schools risk becoming active sites of operational autonomy for nefarious actors.
Policy recommendations
To move from critique to action, three governance priorities are clear. First, schools must mandate Privacy Impact Assessments (PIAs) for all staff and student image use, operationalized through image-use policies that require opt-in, purpose-specific, and time-limited consent, with revocation mechanisms. These policies must make explicit to parents and carers the risks posed by deepfakes, and it is suggested that they align with the Australian eSafety Commissioner’s Safety by Design framework and incident response guidance. Second, occupational health and safety standards should be updated to formally recognize deepfake technology-facilitated abuse (TFA) as a psychosocial hazard, obligating education systems to provide staff with reporting pathways, protective measures, and post-incident support alongside student protection. Third, procurement processes must mandate vendor compliance with Safety by Design principles, such as watermarking, detection, and takedown protocols, ensuring that technological safeguards are embedded at the point of adoption rather than retrofitted in response to harm. Taken together, these measures constitute anticipatory governance for education, designed to provide an enforceable policy response that embeds psychosocial safety for teachers and students in the everyday governance of education.
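To indicate what the first priority might look like when operationalized, the sketch below models an opt-in, purpose-specific, time-limited image-consent record with a revocation mechanism. It is a hypothetical illustration in Python: the class, field names, and purpose categories are assumptions invented for this paper, not a schema prescribed by any regulator or by the Safety by Design framework.

```python
# Hypothetical sketch of an opt-in, purpose-specific, time-limited image
# consent record with revocation; all names are illustrative assumptions.
from __future__ import annotations
from dataclasses import dataclass
from datetime import datetime, timedelta
from enum import Enum

class Purpose(Enum):
    NEWSLETTER = "newsletter"
    WEBSITE = "website"
    SOCIAL_MEDIA = "social_media"  # highest downstream-misuse risk

@dataclass
class ImageConsent:
    subject_id: str                     # student or staff member
    purposes: set[Purpose]              # purpose-specific, never blanket
    granted_at: datetime
    expires_at: datetime                # time-limited by default
    revoked_at: datetime | None = None  # revocation mechanism

    def permits(self, purpose: Purpose, now: datetime | None = None) -> bool:
        """Opt-in logic: a use is allowed only while consent is current,
        unrevoked, and covers this specific purpose."""
        now = now or datetime.now()
        return (
            self.revoked_at is None
            and now < self.expires_at
            and purpose in self.purposes
        )

    def revoke(self) -> None:
        """Withdraws consent for all future uses from this moment on."""
        self.revoked_at = datetime.now()

# Example: consent granted for the newsletter only, for one school year.
consent = ImageConsent(
    subject_id="student-042",
    purposes={Purpose.NEWSLETTER},
    granted_at=datetime.now(),
    expires_at=datetime.now() + timedelta(days=365),
)
print(consent.permits(Purpose.SOCIAL_MEDIA))  # False: no blanket consent
```

The design point is that permission is evaluated at each use, per purpose and per time window, rather than inferred once from a signed form, which speaks directly to the downstream-use gap identified in the discussion above.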
Conclusion
This paper has critically examined how the proliferation of deepfake media, enabled by AI, fundamentally disrupts traditional notions of classroom safety and exposes serious gaps in educational policy and governance. Building on Kardos (2025), this analysis has shown that the issue is not merely one of non-consent or a “women’s issue.” Rather, educational policy must connect the harms of AI-generated pornography to broader patterns of pornography, gendered violence, and the commodification of digital identities. Doing so demands a radical rethinking of how authority, consent, and workplace rights are negotiated in digitally mediated classrooms. Drawing on critical feminist theory, Feenberg’s Critical Theory of Technology, and Manne’s analysis of misogyny, the analysis has illuminated how both students and teachers, particularly women, are newly vulnerable to psychological, reputational, and professional harms that extend far beyond physical safety or cyber-safety paradigms.
Through two educational imaginaries, I have demonstrated that these risks are not simply the product of individual malfeasance but are structurally enabled by profit-driven communication and workplace practices. The key findings underscore three urgent imperatives for policy and practice. First, the psychosocial safety of teachers, not only students, must be foregrounded in all conceptions of classroom safety, with explicit attention to occupational health and gendered violence. Second, current governance frameworks, rooted in technical, neoliberal, or student-centered logics, are insufficient to address the complexities of AI-generated harm, which is fundamentally relational, communicative, and entangled with broader systems of power. Third, without rigorous, anticipatory, and intersectional policy intervention, schools risk perpetuating the very asymmetries and exclusions that deepfake technologies exploit. Policy imperatives to guide future research include, first, examining the impacts of mandating PIAs for all staff and student image use in schooling contexts; second, investigating the implications of codifying deepfake technology-facilitated abuse (TFA) as a recognized psychosocial hazard under OHS legislation; and third, exploring how educational stakeholders respond to a redesign of marketing consent practices. This means shifting from the opt-in, purpose-specific, and time-bounded consent frameworks currently in place to frameworks in which parents and carers are explicitly informed of the potential harms of deepfakes, notwithstanding Safety-by-Design attestations by vendors (watermarking, detection, and reporting service levels).
As such, there is a pressing need for further research that moves beyond descriptive accounts of technological risk. While the United Nations Sustainable Development Goals call on us to “leave no one behind” by creating truly safe and inclusive educational environments, the findings of this paper suggest that the current realities of deepfake media, gendered harm, and profit-driven image governance significantly undermine this ambition. This paper calls for further research: first, to investigate the lived experiences of teachers and students impacted by deepfake harms; second, to explore the limitations of existing governance structures and the political economies that shape consent and control in educational settings; and third, to engage in interdisciplinary and participatory inquiry to theorize new policy models, ones that center psychosocial safety, gender justice, and collective accountability for AI governance in education, for education.
Funding
The author received no financial support for the research, authorship, and/or publication of this article.
Declaration of conflicting interests
The author declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
