Abstract
Generative AI is extensively used to create pornographic material. These images and practices are becoming part of the sexual culture and have a significant impact on gender inequality. Many of the images are generated without the knowledge of the women depicted, and a considerable proportion are child sexual abuse materials. GenAI and technology-facilitated violence have been within the scope of the United Nations' concern, and educational guidelines were published in 2023. This paper conducts a feminist critical policy analysis of the most recent UN guidelines to investigate the discourses that dominate the response to GenAI. The analysis shows that the critique of GenAI often centres on the non-consensual nature of the generation and distribution of pornographic images. Furthermore, the discourse fails to connect AI-generated pornography to the problems of pornography in the first place, which produces a depoliticized and dehistoricized discourse about the harms of AI-generated pornography. The paper critiques the narrow focus on consent and explores feminist ways to approach the issues raised by AI-generated pornography. It also discusses implications for education and future policies.
Introduction
2023 was a year heavily impacted by the expansion of generative AI (GenAI). Although this technology has been around for a few years, its impact on everyday life has grown exponentially since the end of 2022, when ChatGPT became publicly available. AI-generated (also referred to as deepfake) nude images also saw their biggest rise in 2023 (Lakatos, 2023). There are two main reasons for this. First, these generators have stepped out of the shadows of the dark web and can now be used for free, with minimal effort, by anyone with internet access (Lakatos, 2023). Second, the technology has evolved rapidly over the past years, and the generated images have developed from pixelated, uncanny pictures into realistic-looking images, perfectly enhanced AI avatars, and realistic pornography. A study looking into online deepfake videos found that 98% of them depict nudity or sexually explicit activities, and 99% of these feature women (Security Hero, 2023). GenAI is also extensively used to create child sexual abuse materials (CSAM) (IWF, 2023). AI-generated pornography is a new genre of pornography and a new, widespread digital practice.
There has been growing interest in including pornography as a topic in comprehensive sexuality education (CSE) (Baker, 2016; Wilson et al., 2019). Evidence shows that adolescents are exposed to or consume pornography regularly (Rodríguez-Castro et al., 2021), which makes pornography a primary source for learning about sex (Rothman et al., 2018). Mainstream pornography has been linked to violence against women (VAW) both in its production and in its impact on the sexual culture, gender norms, and sexual practices (Waltman, 2021). Research shows that school-based educational programs addressing pornography have positive outcomes related to attitudes and behavioural intentions among adolescents (Rothman et al., 2020). CSE moves beyond the health aspects of sex and includes discussions on gender equality, the sexual culture, and violence prevention. Some also suggest incorporating technology in CSE (Rodríguez-Castro et al., 2021), as young people’s relationships and sexuality are embedded in the technological environment as well. AI-generated pornography takes this entanglement to another level.
GenAI is expected to pose challenges to education and to gender equality, as AI likely reproduces social biases (Dignum, 2019). Commonly, technologies develop faster than policy and regulation responses (Dignum, 2019), and GenAI is no exception. GenAI boomed during 2023, and the UN has been a leading organization in responding to the new challenges by producing reports and guidelines around digital developments and education (Tawil and Fengchun, 2024). As policies are developed, it is a critical question what knowledge is created about GenAI, what is problematized about this practice, and what solutions are offered (Bacchi, 2009). This paper sets out to explore the initial responses to the boom of GenAI and the educational contextualization of this technology by the UN. The analytical framework is Bacchi’s What’s the problem represented to be? (WPR) method, where the analysis is directed at problematizations in the documents. Problematizations are central to the practice of governing (Bacchi, 2009) and therefore worthy of analysis. This policy-interested case study analyzes how recent UN documents related to AI, GenAI, information and communications technologies (ICT), and education problematize AI-generated pornography.
One of the main findings is that AI-generated pornography is often problematized on the basis of its non-consensual nature. Consent has also been a central concept in CSE in the past few years. This article seeks to stimulate a nuanced understanding of what the consent paradigm problematizes and what it misses regarding AI-generated pornography. I argue that educational discourses need to respond to the challenges of AI-generated pornography as part of education for violence prevention and gender equality. It is a matter of education because it affects children and teenagers as well. Furthermore, I argue that pornography and AI-generated pornography are inherently intertwined in the contemporary sexual and digital culture, and that the educational approach to these issues should reflect this. GenAI provides new evidence of the harms of pornography, which could serve as a vantage point for critical discussions and educational responses.
Image-based sexual abuse, child sexual abuse material, and AI-generated pornography
In this paper, I will use the term AI-generated pornography, by which I understand both images and videos. I contextualize AI-generated pornography in the realm of image-based sexual abuse; therefore, I give an overview of this phenomenon.
The distribution of private sexual images is not a new practice. The non-consensual distribution of sexualized images facilitated by technologies has become a global phenomenon (Hall and Hearn, 2018). The terminology has evolved as research has expanded our knowledge about the nature of this practice. The term ‘revenge porn’ was used to describe cases where an (ex)partner shares nude images after the relationship has deteriorated (ibid.). Recently, the term non-consensual dissemination of sexual images has substituted revenge porn in the literature (Sciacca et al., 2023). This term describes all cases where a sexual image is sent to others or posted somewhere without the consent of the person in the image. This practice lies on the continuum of sexual violence (McGlynn et al., 2021), and it has been associated with other forms of sexual abuse (Frankel et al., 2018). The non-consensual part of this practice is, in fact, often sexualized: non-consensual porn is a mainstream pornography category (Vera-Gray et al., 2021). Moreover, non-consensual dissemination of sexual images is a common practice among adolescents as well (Sciacca et al., 2023).
Image-based sexual abuse is a term more widely used in the field of criminology (DiTullio and Sullivan, 2019). This term also acknowledges that victims of this practice went through a form of sexual abuse (ibid.). Image-based sexual abuse is a wider term than non-consensual dissemination of sexual images, as it refers to the creation, distribution, and threat of sharing someone’s sexual or naked image (Henry and Powell, 2018). Legislation lags behind in responding to this issue. A comparative analysis of current European laws on the non-consensual distribution of sexual images found that the majority of current legislative frameworks do not treat this practice as a sexual offence but only as a minor privacy violation (Mania, 2024).
The impact of image-based sexual abuse on mental health is dire: victims report suicidal attitudes and behaviour, social anxiety, trust issues, and depression (Bates, 2017; Campbell et al., 2022; Sciacca et al., 2023). Loss of control over the pictures and where they are distributed proves to have an overwhelmingly harmful impact (Bates, 2017). GenAI technology makes it even harder to control what happens to one’s images.
GenAI rewrites the scene of image-based sexual abuse. The creation of fake sexual images is a form of abuse: ‘it is an invasion of privacy and a violation of the right to dignity, sexual autonomy and freedom of expression’ (Henry, Powell and Flynn, 2018). To generate nudes with AI, there need not be an initially shared naked image, as GenAI apps utilize others’ nude images to create new ones.
The generation of sexual images with AI became an easily accessible and widely used practice in 2023 (Lakatos, 2023). Nude-generating apps, bolstered by AI, have become a full-fledged online industry, utilizing advanced marketing and monetization techniques. A study found that a group of 34 GenAI apps that create nudes received over 24 million unique visitors to their websites, just in September of 2023 (Lakatos, 2023). The report also warns that the increasing prominence of these services will likely lead to further harms to women and the generation of CSAM (ibid.).
Research has already found that a prominent part of AI-generated pornography is CSAM. The National Center for Missing & Exploited Children in the US received 4700 reports of AI-generated CSAM over 2023 (NCMEC, 2024). This report showcases prompts that users input to GenAI models, and the abusive use of these images is evident. Another recent study revealed that the large image-based dataset LAION 5b contains a significant number of suspected CSAM images (Thiel, 2023). This dataset is currently the largest open-source dataset used to train leading AI image generators such as Stable Diffusion. The Internet Watch Foundation (2023) found that on a dark web forum for CSAM, the majority of the generated CSAM depicted naked female children, and the most represented age group was 7–10-year-olds.
Image-based sexual abuse is a gendered issue and is a form of violence against women. Women are more likely to become victims of this abuse, while men are more likely to be perpetrators (Eaton et al., 2017; Karasavva and Forth, 2021), which is also the trend among adolescents (Wachs et al., 2021). The impact of this abuse also differs, and women in general experience greater depressive symptoms (Sciacca et al., 2023). Deepfake pornography targets almost exclusively women (Ajder et al., 2019), which is in line with historical patterns of technology-facilitated gendered abuse (Giugni, 2021). It is also a form of violence against children (NCMEC, 2024).
I use the term AI-generated pornography in this paper. This includes pornographic images or videos where the face is that of an existing and recognizable person; those that depict non-existent but realistic-looking people; and AI-generated CSAM. It is important to note that while I refer to all these different types with an umbrella term, I do think there is an important difference between images of existing individuals and realistic representations of non-existent human beings. Furthermore, CSAM, or child pornography, is and should be a separate legal and cultural category. AI-generated CSAM that does not depict real children still constitutes child sexual abuse, and the models are trained on images of real CSAM. In this paper, I refer to all of these categories under one umbrella term because much evidence supports that the porn industry is built from all these different elements, and I argue in this paper that they should all be problematized as pornography.
These materials are generated with deepfake technology, which is ‘the most advanced and realistic form of synthetic media’ (van der Sloot and Wagensveld, 2022). I chose the term AI-generated pornography because, although deepfake pornography is a widely used term in the public debate, it is often understood to refer only to face-swapping technology.
More research is needed on how adolescents interact with AI-generated pornography. A Belgian study found that young people’s interaction with AI-generated pornography is also gendered: more young men have seen it or have tried creating it (Van de Heyning et al., 2024). Furthermore, young people reported that they regularly encounter these images on social media (ibid.). Cases in which AI-generated nudes of female students were distributed in schools have also been reported on news platforms.1
Education discourse and the UN
The discussion about how to build and regulate AI responsibly is multi-faceted, and education has an important role in the ethical use and development of AI (Dignum, 2019). Therefore, educational policies related to AI are political – they are both impacted by and reproduce general discourses.
GenAI and technology-based violence have also been in the scope of the UN’s concern. The UN is a strong political power with tremendous influence on global discourses. The problems the UN defines create discourses that have a strong global impact on policies and action plans. UN discourses have a financial impact as well, as they shape how funding applications may be expected to formulate problems and initiatives. Furthermore, the UN has important links to other transnational organizations which define political processes, such as the OECD and the World Bank (Lingard and Rizvi, 2009). The UN is an advocate for digital education that applies a human-centred approach (Tawil and Fengchun, 2024), CSE, and gender equality (UNFPA, 2024). Since UN discourses have a great impact on education globally, research investigating these discourses can contribute to further development.
Methodological and theoretical considerations
This study is a policy analysis that explores the educational discourse UN policies create about AI-generated pornography. The analytical model is Bacchi’s policy analysis framework, the WPR approach. In this framework, policies are viewed as productive practices that produce and constitute problems (Bacchi, 2009). Policies have both cultural and historical meanings, as they give an interpretation of phenomena at a given time. Furthermore, policies are political practices that define, prioritize, and produce knowledge about given phenomena, as well as propose solutions (ibid.). In this framework, which belongs to the critical policy analysis tradition, policies are viewed as discourses that create taken-for-granted truths about the world (Ball, 1993).
Furthermore, this study applies a feminist lens and investigates how the status quo is reproduced in the discourse: how gendered norms and roles, normalized misogynistic cultural practices, and existing power structures may be detected in the policies. The feminist ethics here is one that prioritizes the rights of women and girls to be free from sexual objectification, exploitation, and abuse. The WPR framework leads to a political analysis – the analysis is conducted with the explicit aim of taking the side of those who are harmed. The goal is to challenge problem representations that have harmful effects and to suggest different ways to represent these problems (Bacchi, 2009).
Bacchi’s approach was selected because it foregrounds the idea that policies are never neutral. Rather, policy is understood as an ongoing and interactional process (Ball, 2013). The aim of this analytical work is to explore what values and ideologies shape the problems represented around pornography in different policies and curricula. The analytical units are the represented problems, which are examined both as individual units that constitute discourses and as elements for comparison across the different policies.
To my knowledge, similar policy analyses have not been conducted on UN materials regarding GenAI, and none to date have adopted Bacchi’s WPR approach.
The 7 analytical steps of WPR
This framework applies the following questions for the analytical work: 1. What’s the problem represented to be in a specific policy? 2. What deep-seated presuppositions or assumptions underlie this representation of the ‘problem’? 3. How has this representation of the ‘problem’ come about? 4. What is left unproblematic in this problem representation? Where are the silences? Can the ‘problem’ be conceptualized differently? 5. What effects (discursive, subjectification, and lived) are produced by this representation of the ‘problem’? 6. How and where has this representation of the ‘problem’ been produced, disseminated, and defended? Has it been, and/or how can it be, disrupted or replaced? 7. Apply this list of questions to your own problem representation.
Selection of the material
As the purpose of the study was to get an overview of the current educational discourse of the UN on AI-generated pornography, UN guidelines regarding education and AI were included. Some documents more broadly addressing ICT and education were also selected, to get a more comprehensive view of what problems are represented in the discourse (Appendix 1 – Empirical material). These were Reimagining our futures together: A new social contract for education (2021) and Guidelines for ICT in Education Policies (2022). These two were included because of their overarching approach to technology and education. The latter was selected because its scope is education policies, which constitute the empirical material of this study. The former establishes what directions the UN educational discourse should take when it comes to technology. This document is referred to in the Guidance for Generative AI in Education and Research as a building block; therefore, it was relevant for this analysis.
The study is also comparative and intentionally explores intertextuality between different materials. Policy is intertextual by nature (Ball, 1993) and the selected documents indeed referenced other policies, legislation, interventions, and strategies. Furthermore, UN documents often refer to other UN documents. The core document for all education and AI-related guides is the Beijing Consensus on Artificial Intelligence and Education (UNESCO, 2019) as well as the Recommendation on the Ethics of Artificial Intelligence (UNESCO, 2021). In this analysis, these different policies are treated as one discourse and trends as well as discrepancies are pointed out in this discourse.
For this paper, no policies about sexuality education were analysed, because the UN addressed GenAI in these documents and not in its global sexuality education discourse. However, a further goal was to see whether GenAI is contextualized within the sexual culture and linked to gender-based violence. The selection of documents is not exhaustive; however, the most recent and prominent documents are included.
Analysis
I embarked on this analysis with a problem in mind: to explore how AI-generated pornography is defined, in what context it is mentioned, and what solutions are offered. The analysis has a narrow scope: the representation of AI-generated pornography.
Not all WPR questions guided the analysis equally. I identified questions 1, 2, 4, and 5 as most closely aligned with the study’s aims, allowing me to interrogate the problems these texts identify, how the ‘problem’ of AI-generated pornography is represented, and the underpinning presuppositions, silences, and effects of these problem representations. I have not examined how these problem representations have been produced, disseminated, and defended (WPR Q6); WPR Q3 is not addressed separately, but I believe it is blended into the analysis.
Q1. What’s the problem represented to be?
To understand how AI-generated pornography is represented in the discourse of the UN, I first looked for general problem representations to investigate how this problem is present and what other problems are highlighted. This is important to get a comprehensive understanding of the discourse around GenAI, to allow intertextual analysis, and to compare problem representations to each other and to available research evidence.
The report Technology-Facilitated Gender-Based Violence in an Era of Generative AI (UNESCO, 2023) explicitly declares AI-generated porn targeting women and the public dissemination of these images to be the most concerning issue of GenAI: ‘Text-to-image generative AI models make it easier to generate realistic-looking images of women in scenarios and situations that they were not in or did not consent to. (…). This is an attack vector that lends itself very easily to creating “fake” narratives, spreading misinformation, and most concerningly, generating AI porn by targeting specific women with images that may be publicly available’ (UNESCO, 2023, pp. 23–24).
Furthermore, the report discusses that deepfake pornography is not a new issue and points out the long-standing lack of reflection and regulation regarding these materials: ‘Before ChatGPT and Stable Diffusion, deepfake pornography was built on freely available code. Multiple attempts at monetizing pornography generators have occurred in the last few years, including celebrity porn generators, images featuring fake women, and more malicious tools to create pornographic images of any individual (of course, this was used overwhelmingly on images of women). (…). It’s hard to say whether now at-scale issues of misinformation and disinformation via deepfakes would be as pervasive if platforms had addressed the issue of deepfake pornography generation years ago’ (UNESCO, 2023, p. 11).
The report on technology-facilitated gender-based violence (TFGBV) therefore flags the proliferation of AI-generated pornography as an important focus of work regarding VAW.
The report promotes a comprehensive, multi-actor solution and guideline package to mitigate the harms of GenAI. It also highlights the role of education: ‘Take advantage of Media and Information Literacy programmes regarding falsified online content. Education remains a crucial component in understanding the reach, impact, and consequences of TFGBV, and when made available by content providers, distributors or policymakers, should be consumed to increase awareness’ (UNESCO, 2023, p. 28).
However, the educational materials analysed here largely overlook the issue of AI-generated pornography.
The Guidance for Generative AI in Education and Research (UNESCO, 2023) does not explicitly mention AI-generated pornography but does mention image-based abuse facilitated with GenAI: ‘GenAI is making it easier for certain actors to commit unethical, immoral and criminal acts, such as spreading disinformation, promoting hate speech and incorporating the faces of people, without their knowledge or consent, into entirely fake and sometimes compromising films’ (UNESCO, 2023, p.17).
The document Reimagining our futures together: A new social contract for education (UNESCO, 2021) centres the future and the purpose of education around the overcoming of discrimination, exclusion, and marginalization. It declares education as a driving force in social transformation, which must be dedicated to gender equality. This document also emphasizes the importance of comprehensive sexuality education which ‘promotes respectful relationships and equality’ (p. 66).
This material also acknowledges that many girls feel excluded from education due to the possibility of physical or sexual harm; however, it does not mention online harassment. While the report about TFGBV acknowledges online abuse as a global issue, it slips under the radar in the educational discourse of the UN:
‘While TFGBV varies geographically, it is consistently demonstrated to be a problem across the world. A global study (Plan International, 2020) estimates that 58% of young women and girls globally have experienced online harassment on social media platforms’ (UNESCO, 2023, p. 6).
‘(E)xposure to inappropriate content involving graphic violence and sexual imagery’ is conceptualized as one of the threats to the cyber security and well-being of learners in the Guidelines for ICT in Education Policies and Masterplans (UNESCO, 2022, p. 74). The problematization here is that children are exposed to this kind of material, and age restrictions on websites are recommended as a solution. This is a valid argument, as an overwhelming amount of research shows that children are exposed to pornography by the age of 11 (Dines, 2010). However, the document does not mention pornography or deepfake pornography specifically, nor does it acknowledge that young people are not just ‘exposed’ to these images but that the images are part of their sexual practices, which would require a more comprehensive educational responsibility around this topic.
The main conclusion this analytical question led to is that there is a discrepancy in the discourse of the UN around AI-generated pornography. While the report about TFGBV flags this as one of the most concerning issues with GenAI, the educational discourse mainly neglects this topic. It could be argued that this is because the main focus of these policies is the integration of ICT and AI in education; therefore, AI-generated pornography is out of their scope. However, the educational goals of the UN clearly state that education should be about AI systems as well as AI ethics. Moreover, the report on TFGBV emphasizes the role of education in combating TFGBV, specifically highlighting the role of media and information literacy (UNESCO, 2023). Furthermore, evidence also supports the need for inclusion of AI-generated pornography among educational issues, as it impacts children, women, and school environments.
The lack of focus on AI-generated pornography in the educational discourse keeps this issue isolated and renders it as a ‘women’s issue’, removed from the general discourse. This leads to a depoliticized educational context around AI – if the real harm of it only appears in specific reports but is not integrated into more general policies, it is hard to expect change. In the education discourse, one of the most concerning issues with GenAI is nearly invisible. Furthermore, the lack of discussion of AI-generated pornography also renders it an ‘adults’ issue’, while it is a part of young people’s reality as well.
Q2. What deep-seated presuppositions or assumptions underlie this representation of the ‘problem’?
After identifying problem representations in the material, the second step was to consider what meanings and ‘deep conceptual premises’ needed to be in place for such contextualizations and what this could reveal about inherent power relations (Bacchi, 2009).
This analytical question led to some emerging themes in the problematization of AI-generated pornography. As established in WPR Q1, not all materials deal with this issue. This question zooms in on those that do and analyzes the represented problems more deeply.
Non-consensual
‘Text-to image models can easily generate images of women in situations they did not consent to being in, thus creating a more realistic vector of image-based abuse’ (UNESCO, 2023, p. 3, No. 5).
The UN discourse mainly problematizes AI-generated porn on the basis that women are portrayed in non-consensual scenarios. This conceptualization implies that the main source of harm these images cause is rooted in the fact that women did not consent to them. Consent has become a central concept in navigating sexual abuse and is a core concept in data privacy in the digital realm, too. This paradigm also seems to lead the discourse around image-based sexual abuse, and it is evidently the main problematization of AI-generated pornography in the UN discourse as well.
AI-generated pornography relies not only on the input image, which may have been acquired consensually or non-consensually by the creator; the GenAI model is also trained on already existing images. The Guidance for Generative AI in Education and Research (UNESCO, 2023) problematizes this fact: ‘GenAI models are built from large amounts of data (e.g. text, sounds, code and images) often scraped from the internet and usually without any owner’s permission’.
While the document does not specifically mention AI-generated pornography, this section applies to those images. These images are often scraped without the consent of those in the images and, most often, without their knowledge. The discourse makes this into an educational problem: teachers and learners should be aware of the ethical issues and consequences of using GenAI (UNESCO, 2023).
Fake
The material discusses how GenAI technology produces images so realistic that it is impossible to differentiate between real and generated ones. This is extended to AI-generated pornography, which contributes to a discourse that the problem with AI-generated porn is that it is fake.
Fakeness is indeed the main problem with fake news and synthetic histories. In those cases, lies are distributed, and people’s perceptions, knowledge, and decisions are manipulated. However, this framing does not fully grasp the problem with AI-generated pornography, where the main problem is not that it is fake. The evidence of harm caused by the public distribution of real sexual images shows this. If both real and fake pornographic images cause harm, then the problem lies beyond this dichotomy.
Due to GenAI technology, anyone can now create pornographic images of any woman (without needing to photograph her), which causes harm whether the images are fake or real. This is present in the TFGBV report of the UN but missing from the educational discourse: ‘In a short period of time, GenAI has reshaped the discourse on AI and its impacts on society. It can create realistic text or video by a simple text input. (…) Previous iterations [of] GenAI required coding skills – now anyone with internet access is only limited by their imagination’ (UNESCO, 2023, p. 4).
Interestingly, the fact that AI-generated pornography is fake is often used to defend violent, sexualized materials, as supposedly no real women were hurt in the making (Öhman, 2020). Feminist critique has challenged this view: while abuse may only be ‘represented’ in AI-generated pornographic material, these materials exist because the dehumanization and VAW take place in reality (Richardson, 2022).
Embedded harm and malicious harm
Harms caused by generative AI are categorized into embedded harms and malicious harms in the UN discourse. Embedded harms are not caused intentionally but arise from biases in the training data, which then reproduce biased knowledge. Malicious harms refer to intentional harms. The report on TFGBV claims that the use of GenAI to abuse women is intentional, committed by malicious people: ‘While most individuals are building this technology for wide-ranging creative use to provide or derive well-intentioned services, it is already being used for harm by malicious individuals’ (UNESCO, 2023, p. 5).
This is an insufficient way to describe perpetrators of this kind of abuse, as it tells us little about the structural nature of VAW. This framing hides the systemic nature of men’s abuse of women and how this is rooted in gendered relations, beyond personality traits. The ‘malicious’ framing also hides the fact that abuse is often committed because the abuser benefits from it (Flood, 2019). Furthermore, the discourse does not define what ‘malicious’ means. It surely seems to include people who create these materials and then harass women with them. Does it include those who create the materials for their own sexual use, where the person depicted is never aware of the material? Does it include those who develop these services merely for their economic interest?
The dichotomy of embedded and malicious harm is also problematic, as the line between them can be blurry. The TFGBV report discusses the case of Lensa, an AI avatar app using Stable Diffusion, which was trained on LAION-5B. The app was criticized by women who received sexualized images from it even though the pictures they had uploaded were not sexual (Heikkilä, 2022). Lensa’s case is discussed as ‘particularly shocking’ (UNESCO, 2023, p.14). Given the amount, popularity, and persistence of pornographic images on the internet, it could have been expected that many pornographic images would end up in the dataset. Therefore, the company could have mitigated the embedded harm before the dataset was made public. While the UN discourse seemingly discusses the responsibility of companies, it does not problematize the fact that capital interest is the driving force in the development of GenAI, among other technologies, which in many cases is built on female dehumanization (Richardson, 2022).
Q4. What is left unproblematic in this problem representation? Where are the silences? Can the ‘problem’ be conceptualized differently?
I have already discussed the limited focus on AI-generated pornography in the educational discourse of the UN. In this section, I look into the problematization of AI-generated porn and analyse the absences in the discourse.
Firstly, the topic of pornography is almost completely untouched in the discourse. The relation between GenAI and pornography is mentioned in the analysed documents once: ‘MIT Technology Review journalist (…), who is of Asian descent, consistently received semi-nude and sexualized images returned from Lensa (…), without her consent or prompting. (…) (I)t is a reasonable hypothesis that the persistence of Asian pornographic and sexualized content online influenced the model’s output’ (UNESCO, 2023, p.14).
The problem of AI-generated pornography is defined by the non-consensual and fake nature of these images. The documents do not address pornography itself as a women’s rights issue and do not link it to VAW, nor to racial oppression. Pornography sits at the intersection of gender, class, and racial oppression, often depicting Black women and Asian women as slaves or in torture scenarios (Collins, 2020). South Korean women are by far the most represented in deepfake videos (Security Hero, 2023). While there is potential for an intersectional view in the UN discourse, it is not developed. This leads to a depoliticized and dehistoricized argument about why AI-generated porn is so problematic; without a critical lens on pornography itself, the debate is left to the consensual/non-consensual paradigm.
Moreover, the documents lack a comprehensive vision of where and how education about GenAI will be present in the educational system. They also fail to frame TFGBV, including AI-generated pornography, as an educational issue. The discourse calls for education for social and emotional literacy, and for discussion of respect and consent. This would be a great opportunity to integrate digital education and CSE, which research had called for already before GenAI (Rodríguez-Castro et al., 2021). Missing this connection creates a decontextualized and depoliticized GenAI educational imaginary.
Q5. What effects are produced?
Policies produce impacts that affect the subjects of the policies (Bacchi, 2009). When identifying these, it is important to keep in mind who can be harmed by these effects and who benefits from them. As mentioned before, AI-generated porn is dehistoricized and depoliticized in the UN discourse, as it is not connected to issues of pornography. Furthermore, the educational discourse almost completely ignores AI-generated pornography, which depoliticizes GenAI in education, given that AI-generated porn is one of the most concerning issues of TFGBV and increasingly affects women and girls. If AI-generated pornography is not problematized, educational processes can neither prevent this kind of abuse nor support victims of this practice. They will also fail to educate participants about the ethics of sexuality, violence, and gender equality in relation to GenAI.
Even though the material states that violence is caused not by technology but by its users, perpetrators often remain invisible in the discourse. As in other discourses around VAW, the emphasis is on women as victims, and men (who are the main creators of AI-generated pornography) remain invisible. This is problematic when it comes to educational guidelines, because a pro-feminist approach to violence prevention is more effective (Flood, 2019).
Discussion
As the analysis shows, the consent paradigm is strongly present in the problematization of GenAI. I will now discuss this further through a feminist lens to identify areas for policy development and change that could encourage us to think differently about how AI-generated pornography could be understood and framed in future policies.
Today, consent has an extensive reach in sexual culture and discourse, which has received much critique from feminist scholars. The concept of consent has been abstracted and has become the single ethical guide for sexual acts (Fischel, 2019). This is true for the sex industry, and more specifically for pornography, as well. Decades of research on the harms of pornography and its impact on VAW, at both the structural-societal and the individual level, are ignored once it is assumed that the woman in the scene consented. Seemingly, the problematization of AI-generated pornography has inherited this discourse.
Many feminists have critiqued the limits of consent-based sexual ethics. MacKinnon (2016) questioned whether the concept of consent makes sense in a world where sex inequality persists and is most pervasive in the sexual culture. Fischel (2019) suggests that consent is not a sufficient concept if we want to reach a more equal and feminist sexual culture. When it comes to describing the quality and mutuality of sexual experiences, consent fails again, as ethical and good sex goes beyond whether both parties agree to it (Archard, 2022).
Recently, consent has become a central concept in sexual violence prevention efforts (Durbach and Kristen, 2017; Weale, 2014), and more broadly in sex education (Gilbert, 2017). Educational efforts are directed towards informing young people about consent, how to communicate it, and its importance for equal sexual encounters (Beres, 2020). The promotion of the consent paradigm has roots in sexual violence legislation, and it gained great momentum in the broader cultural and political discourse after the MeToo movement. It has significantly impacted violence prevention work and sex education at both the policy and practice levels. Feminist critique has challenged both the negative model of consent (‘no means no’) and the affirmative consent model (where only an enthusiastic yes means yes). The critique targets the notion this model creates of girls’ and women’s sexuality, namely, that they are respondents to boys’ and men’s desires (Gilbert, 2017). While the affirmative consent model acknowledges that consent can be forced in various ways, it still limits sexual discussions to yes/no questions and preserves the gendered roles (Archard, 2022). Furthermore, educators have doubts about the sufficiency of these programs, as they presuppose that violence happens due to miscommunication (Beres, 2020).
The dichotomy of consensual/non-consensual distribution of sexualized images was already problematic before the era of GenAI – in questions such as what the person visible in the images consented to exactly; the coercive behaviours leading to people creating sexual images; and the room for victim-blaming conclusions this notion creates (Henry and Powell, 2015).
Framing the problem of AI-generated porn within the consent paradigm shifts the emphasis away from the abuse. What if we tried to think of it as if it had nothing to do with consent? What problems could we then see with these images, with the apps created for this purpose, and with the very act of men creating such images of women?
In many of the cases where AI-generated pornography is used to harass women, the question of whether they consented is irrelevant. Even if women consent to the creation of sexual material, as soon as it is used as a tool for harassment and abuse, consent loses all meaning, as well as its ground as a defence. Furthermore, in many cases, women are not aware of the creation of the material and only learn of it once it is made public. In these instances, the consent paradigm yet again has nothing to offer for the measure of harm. Lastly, the UN discourse problematizes the fact that datasets contain images collected without the consent of those in the pictures. However, in the discourse about AI-generated porn, it is not problematized that the women whose pornographic images are used to generate more pornography likely never consented to this use of their imagery.
How intertwined consent has become with sexuality is also shown by the fact that the consent argument is only applied to GenAI when it comes to sexual images. When the discourse discusses fake news and synthetic stories, the consent argument disappears; it is taken for granted that the act is harmful by its function and impact, regardless of the victim’s actions leading to it. This is because the consent argument has become the leading concept in deciding what is harmful, despite the multifaceted critique that this paradigm has received.
The consent approach to pornography has also received strong feminist critique. Firstly, sexual objectification, which always happens in pornography, is on the continuum of sexual violence (MacKinnon, 1983). Furthermore, there is overwhelming evidence that consent is not sufficient when ‘given’ by women in disadvantaged social and economic positions, revealing the coercive, forceful, and violent strategies of everyday porn production (Dines, 2010). Finally, an overwhelming amount of evidence shows that porn consumption contributes to the persistence of the gendered power hierarchy and VAW (Waltman, 2021).
Moreover, the sufficiency of the consent paradigm is questionable in the digital space, where it is not possible to consent to all uses of one’s images, as they can be shared and downloaded without further consent. These issues persist, and are likely aggravated, with AI-generated pornography. It becomes ever more evident that consent is not a sufficient concept to lead a sexual discourse, politics, and education striving for gender equality and the elimination of VAW.
As discussed in the analysis, the topic of pornography is completely missing from the UN discourse around AI-generated pornography. This silence depoliticizes the issue, as AI-generated porn is further evidence that pornography is a way to abuse women. The topic of pornography has been suggested as a concern of both CSE and digital education, yet research suggests that it still lacks general emphasis (Aznar-Martínez et al., 2024). There is also little research on how pornography is approached in sex education programs. Studies show, however, that school-based educational programs addressing pornography have positive outcomes in knowledge, attitudes, and intentions among adolescents (Rothman et al., 2020; Vandenbosch and van Oosten, 2017). Similar results are shown for violence prevention programs, implying that CSE that actively addresses gender equality can bring significant improvements (Aznar-Martínez et al., 2024).
Implications for practice and policy
Education science could serve as inspiration for educational policies about AI-generated pornography. Educational spaces, such as schools, should be spaces for critical discussions about the sexual culture. The educational discourse of the UN promotes gender equality, and I argue here that the evidence points to the need to include topics such as VAW, pornography, and AI-generated pornography as well.
Collective Shout, an Australian-based organization working against VAW and the harms of pornography, has been addressing the harms of AI-generated pornography and deepfakes since 2019. Culture Reframed, an expert organization on pornography education, has also published a report on AI pornography (2024). The UN has a significant global influence, and it could work with organizations that have expertise on this topic towards an evidence-based education on GenAI that truly strives for gender equality.
The evidence about the harms of pornography and AI-generated pornography, and the proliferation of TFGBV, calls for an integration of CSE and digital education. As GenAI becomes more widespread and an everyday tool, educational spaces should not keep silent about AI-generated pornography. Research on the harms of image-based sexual abuse should drive the knowledge constructed in these policies. Critique of AI-generated pornography should apply an intersectional feminist lens. The UNFPA acknowledges that marginalized communities are more likely to be victims of this abuse (BodyRight Campaign). Furthermore, the women in the input images, for example women in porn, should also be considered victims of this abuse, and this should receive more emphasis in the public discourse.
The UN discourse could also take inspiration from educational philosophy regarding GenAI. Karamercan et al. (2022) discuss how, in digital spaces, human qualities seem to change into data. This is what happens with AI-generated pornography, where women’s bodies, sexualities, and sexual abuse are turned into data. One of the reasons VAW persists is the dehumanization of women, which happens in multiple ways; AI-generated pornography seems to be the epitome of this dehumanization, both in the input data and in the output images. Beyond dehumanization, commodification of this ‘data’ also takes place, as AI-generated pornography is now a huge business. It is often said that GenAI is out of control, but this is not true. The problem is that the development of these applications and the sex industry are controlled by actors who capitalize on VAW, and education should take the stance that we can resist the abuse.
Education about risks, sexual abuse, and pornography should not aim to make students scared of sex or unable to see pleasure in it, nor should it place blame on them. The purpose should be to enable them to have better sexual experiences, and it should treat students as capable of navigating their sex lives and the sexual culture. The educational discourse could draw on trauma-informed sexuality education approaches, which acknowledge trauma and abuse but strive for the possibility of pleasure in future sexual experiences (Lamb and Plocha, 2011). Discussing these topics can counter victim-blaming; framing abuse within gendered power relationships can lift personal shame or blame from victims. It also makes it clear that VAW can be prevented, and that perpetrators have a choice in their relationships and sexual practices.
Educational places also have the potential and the responsibility to develop communities. The concept of intertextuality can help us understand how to see a community in which students in different social positions share a space and have different experiences with the sexual culture and with AI-generated pornography. Intertextuality is a relational element involving the bodies, histories, epistemologies, and ontologies of different individuals, all of whom need supportive educational spaces (Karamercan et al., 2022). This can also be related to the trauma-informed sexuality education approach, which entails the awareness that there are trauma survivors and potential violence perpetrators in student communities (Lamb and Plocha, 2011).
A future based on equity requires an education that prioritizes critical consciousness, holistic development, and social justice (Lissovoy, 2015 in Aksakalli, 2024). Student-centred pedagogy, culturally relevant curricula, and community-engaged learning experiences should be promoted, empowering students to critically engage with complex issues in order to bring change to their communities (Aksakalli, 2024).
Conclusion
I argue that, in a discourse striving for gender equality and the eradication of VAW, the harms of AI-generated porn must be connected to the harms of pornography in the first place. The generation of pornographic images has its historical and material roots in the pornography industry; therefore, it cannot be understood apart from it. Different paradigms, and especially the sex industry, have tried to normalize and relativize the harms of pornography to women and men, and the leading argument is built on the consent of the women in the industry. This analysis shows that the consent paradigm fails when it comes to AI-generated pornography, and it will not be sufficient to tackle TFGBV. The UN, as a major human rights agency that actively promotes gender equality, should take up the issues with pornography explicitly. AI-generated pornography should also be declared image-based sexual abuse. The UN should also actively problematize the role of GenAI in the proliferation of CSAM.
I also argue that, since sexuality and technology are increasingly intertwined, CSE and digital education should take an integrated approach to this. Global policies regarding technology and digital education have been fast to react to GenAI, while the global CSE discourse of the UN is yet to react to AI-generated pornography. Specifically, CSE programs should foster a critical view of pornography and incorporate gender-based violence prevention. There is also a need to strengthen and disseminate the evidence that a curricular emphasis on gender, equity, and rights can lead to improved health outcomes (Haberland and Rogow, 2015). Research on internet safety education also shows that integrated educational approaches addressing online and offline safety are more efficient (Finkelhor et al., 2021). This should be the approach of an education for gender equality as well, because the power dynamics and abusive practices exist in both digital and offline spaces.
Another important argument for including discussion of AI-generated pornography in education is that these images are already being generated of young girls, and this abuse is likely to become more widespread. Therefore, schools are also spaces of victims of image-based sexual abuse, as well as of perpetrators. This is why trauma-informed sexuality education approaches should be considered. This approach also entails an intersectional feminist lens, as well as consideration for those who are vulnerable to becoming perpetrators (Lamb and Plocha, 2011).
Feminist education and community building can support victims and bystanders, provide them with tools for dealing with the occurrence of abuse, and tackle victim-blaming. It can also contribute to an environment where the creation and distribution of abusive images is not accepted and is openly discussed. Furthermore, by emphasizing AI-generated pornography as image-based sexual abuse, policies can help teachers stay updated and build solutions and ways to support students in schools. For this, a comprehensive, political, and historical analysis of AI-generated pornography is needed.
Acknowledgements
I gratefully acknowledge the support of Dr Kathleen Richardson at De Montfort University, Leicester. She was my expert supervisor during the writing of this paper. We had many invaluable discussions on AI ethics and feminism which are the core of this paper. Furthermore, I am beyond thankful to my supervisors Anette Hellman and Jenny Bengtsson at the University of Gothenburg, who critically reviewed this paper and supported my writing process with comments and encouragement.
Declaration of conflicting interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author(s) received no financial support for the research, authorship, and/or publication of this article.
