Abstract

In 2017, the United Nations Children’s Fund (UNICEF) reported that one in three internet users was a child. 1 The COVID-19 pandemic led to school closures in at least 188 countries, affecting 1.5 billion children and young persons. 2 Given the increased access to internet services 3 and positive parental attitudes towards children’s Information and Communication Technology (ICT) usage, 4 children in India have been reported to spend about three hours online every day. 5 While greater internet usage can provide significant developmental and educational benefits to children, it also increases the risks of online child sexual abuse (OCSA) and exploitation. 6 Reported cybercrimes against children in India surged by more than 400% in 2020 compared to 2019. 7 The latest data from the National Crime Records Bureau shows that reported cybercrimes against children increased by 32% in 2022 over 2021, with approximately 62.4% of these cases pertaining to cyber pornography or the dissemination of inappropriate content. 8 Child sexual exploitation, 9 already a globally under-reported crime, has now extended online: a survey of 18-20-year-olds in 54 countries, including India, by Economist Impact and WeProtect Global Alliance found that 54% of respondents had experienced at least one online sexual harm during childhood. 10 OCSA occurs in various forms, including online grooming, production of child sexual abuse material (CSAM), livestreaming of CSA, sextortion, and webcam child prostitution. 11
One reason for the exponential rise in CSA in recent years is the development of artificial intelligence (AI); the rapid expansion of this technology has posed challenges to child protection of a magnitude hard to manage through human-led approaches. 12 AI technologies that enhance detection, response, and treatment are competing with those that can create content, automate grooming, and create previously unexplored legal complications. 13
Use of AI in the Proliferation of OCSA
November 2022 marked a pivotal moment in the evolution of AI with the release of the generative AI program ChatGPT by OpenAI. 14 Around the same time, text-to-image AI models also saw significant improvements, with newer versions of models such as Midjourney, DALL-E and Stable Diffusion able to produce photorealistic images. 14 Machine learning (ML) and deep learning (DL) are two related technologies that help AI get progressively better at tasks without having to be explicitly programmed by a human being.15,16 They rely on Big Data, that is, huge amounts of information, to operate correctly. 17 Big Data is mostly generated by people willingly using, posting and sharing on the internet. 17
AI-driven CSAM can broadly be classified into two types: (a) AI-generated CSAM, that is, new sexual images of fictional children, and (b) AI-manipulated CSAM, that is, images and videos of real children altered into sexually explicit content. 18 Over time, AI uses ML and DL to become better at creating both kinds of images.
In a snapshot study of one dark web CSAM forum over a period of one month, the Internet Watch Foundation (IWF) found 20,254 AI-driven images. It assessed 11,108 of these images and found 27% to be in absolute contravention of the United Kingdom’s laws against CSAM. 14 In an update to this report nine months later, the IWF found a significant increase in the number of images posted on the same forum depicting penetrative sexual activity, considered the most heinous category of CSAM under UK law. 19 It also found the first partially synthetic “deepfake” AI CSAM videos in circulation, along with an increasing amount of AI-driven CSAM content on the clear web and extensive evidence of AI models being shared for generating images of specific children, including known victims of CSAM and famous children. 19 The National Center for Missing & Exploited Children’s CyberTipline has received more than 7,000 CSAM reports involving generative AI in the United States over the last couple of years, and these numbers are expected to grow. 20 Such proliferation in the nature and magnitude of AI-driven CSAM was recently propelled into public attention when Europol arrested two dozen people for their role in an international criminal group that distributed AI-driven CSAM, a first-of-its-kind operation. 21
Of late, many AI companies have imposed restrictions by limiting training data and banning prompts. However, developers of open-source models, such as Stability AI, cannot in practice prevent their models from generating images they seek to prohibit, as the code is editable and the base models can be fine-tuned, that is, trained on further images. 14 Moreover, there are websites on the dark web that use built-in models to provide the service of generating CSAM, along with many websites offering the service of “nudifying” images.14,22 There are also AI models that can be used “offline” to generate such images. 14 Images of famous children and of victims of CSA on the internet are used to train these AI models, as a large enough dataset involving such children exists.14,19 Perpetrators’ efforts are strengthened by manuals on dark web forums and advice from fellow perpetrators on fine-tuning the generation of CSAM. 14 Thus, the existing dark-web-enabled AI CSAM landscape has allowed perpetrators to collectivize, become resilient and mutually strengthen each other’s perpetration efforts. Incidentally, in 2013, the child rights group Terre des Hommes created a computer-generated, virtual child called “Sweetie” to expose online sexual predators. 23 In the two-and-a-half months that Sweetie was active, 20,000 people attempted to chat with her online, including 1,000 adults from 71 countries who were willing to pay Sweetie for virtual sex. 23 The scenario is likely to have grown much worse now that perpetrators can generate their own virtual children for abuse.
Furthermore, as much AI-driven CSAM is based on identified CSAM victims, it re-victimizes these children, while AI-driven CSAM based on famous children creates new victims. 24 The threat of AI-driven CSAM is all the more pernicious because it does not rely solely on existing CSAM: CSAM can be created without the need for training photos. 24 There have been instances of children creating deepfake nudes of classmates.24-26 A child psychiatrist in the US was convicted of using a web-based tool to create nude images of children he knew for sexual gratification. 27 A former school employee was also charged with using AI to create CSAM of children under his care, using photographs that he had taken himself or procured from parents. 28 Consequently, criminals have begun using AI-driven images for sextortion, obviating the need to groom children online into sending real, self-generated CSAM. 25 Thus, a vicious cycle of OCSA is created, one with the potential of a “victim-less” crime, as models are capable of generating images that do not contain identifiable children. 29 Even so, such images are a threat, given that they encourage the propaganda underlying CSAM and seek to normalize the sexualization of children.29,30
Vulnerability and Risk in Children and Adolescents
There are many risk factors that make children vulnerable to being sexually abused and exploited online. Broadly speaking, they pertain to: (a) demographic characteristics of age, gender, disability, socio-economic status, and sexual orientation; (b) the nature of online social relationships and engagement, including use of chat rooms and decisions or behaviors relating to online chats, “sex talk” or sharing of personal information; and (c) mental health vulnerabilities created by psychiatric morbidity and/or past experience of abuse. 31 Other risk factors include normative adolescent developmental trends, 32 such as impulsive and risky behavior in any setting, increased awareness of sexuality, experiencing sexual arousal, and engaging in sexual activity (for some), online disinhibition and “seeking to fill a void” in their lives, that is, increased use of the internet sub-consciously to seek comfort from the negative effects of trigger events. 33 Yet, as highlighted by the numbers above, increased and regular exposure of children and adolescents to the internet is pervasive.
In addition to the above-described risk factors and vulnerabilities of children and adolescents in the context of potential OCSA, it is also important to understand how children and adolescents perceive risk online. Children under 11 years of age struggle to fully understand online privacy risks. 34 In a participatory study of middle-schoolers conducted in Denmark, children’s perception of online privacy was found to be abstract, with the only tangible concern being strangers misusing their address information. 35 They were unaware of the extent of risks associated with disclosing personal information online. 35 Older children and adolescents (10–17 years) are likely to engage in risk-taking behaviors online, particularly in exploring romance and sex. 36 In a study by ECPAT Sweden involving nearly 13,000 children and adolescents (aged 10–17 years), it was found that children do not draw a clear line between life online and offline. The internet is a natural part of their life, and they perceive it as a place for socializing, learning, entertainment, support, activism, and community that melds with life beyond the internet. 37 By extension, this means that flirting, falling in love, and sex are all a natural part of their online life. 36 Of the participants in the ECPAT Sweden study, 48% revealed that they had sent nude photos of themselves to others, 36 and the majority did not find this problematic, even though they were aware of the risks, as it was a natural part of exploring their sexuality. 36 That said, given that the proliferation of AI-driven CSAM is a very new phenomenon, relatively little is known about children’s and young people’s perception of sexual risks online.
Existing Legal and Regulatory Frameworks
India’s legal framework for protecting children from OCSA comprises specific legislations: (a) the Protection of Children from Sexual Offences (POCSO) Act, 2012, a special law to protect children from CSA in its various forms, including the creation, possession and distribution of CSAM, and to promote their rights during the trial of such offences; (b) the Information Technology (IT) (Amendment) Act, 2008, which builds on the IT Act, 2000 by identifying offences to which children are most vulnerable, and the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, aimed at curbing the circulation of CSAM on social media platforms; 38 and (c) the erstwhile Indian Penal Code, 1860, along with the newly enacted Bharatiya Nyaya Sanhita, 2023 and the Immoral Traffic (Prevention) Act, 1956, which provide a basis for reporting offences such as the sale and circulation of obscene materials, sexual harassment, defamation, criminal intimidation of children, online extortion and child trafficking. 38
Child pornography, referred to in the POCSO Act, 2012, has now been termed “Child Sexual Exploitation and Abuse Material” (CSEAM) by the Supreme Court of India, 39 and includes any visual depiction of sexually explicit content involving a child, including a computer-generated image indistinguishable from an actual child, and any image created, adapted, or modified, but which appears to depict a child. 40 The Act punishes anyone who stores or possesses any CSEAM but fails to delete, destroy or report it, or who stores/possesses it for commercial purposes or transmits/propagates/displays/distributes it for non-commercial purposes. 40 The apex court has interpreted “possession” of CSEAM to mean “constructive possession,” which goes beyond the traditional understanding of possession to include access and control over CSEAM even when it is not stored or downloaded, as an individual is able to manipulate, alter, or delete CSEAM even if it is accessed temporarily.39,41
While there is no jurisprudence on AI-driven CSEAM in India, the existing law on CSA in India could potentially account for AI-driven CSEAM in the following ways:
(a) It makes anyone who creates AI-driven CSEAM based on a real child liable to be punished for using a child for pornographic purposes (Section 14), 40 as CSEAM can be adapted or modified, as long as it appears to depict a child. Creating deepfakes of real children and/or using “nudifying” services online on images of real children can be interpreted as adapting or modifying images of a child to create CSEAM involving said child. Creating CSEAM with animated or non-realistic depictions of children, based on real children, can also fall under this ambit, as such depictions still appear to depict a child.
(b) It makes anyone who uses the internet to access CSEAM liable for storage of pornographic material involving a child (Section 15), 40 even if it exclusively involves AI-driven CSEAM, as long as the children depicted in these images are indistinguishable from actual children or appear to depict a child. This can be inferred from the aforementioned ruling of the apex court, which interpreted “possession” of CSEAM to mean “constructive possession.”
Despite these possibilities, there remains considerable ambiguity in the applicability of the existing law to AI-driven CSEAM. First, while AI-driven CSEAM not based on an actual child could be interpreted to fall under the ambit of “appears to depict a child,” the definition of “child” as per the POCSO Act is any “person” below the age of eighteen. If CSEAM is not based on an actual person, there is ambiguity over whether it can be included under the definition of child pornography (or CSEAM) as per the Act. Such CSEAM can still cause harm by normalizing child abuse, 42 lowering inhibitions against abusing children, 43 fueling behavior sexualizing children and subsequently contributing to the growth of the child exploitation market. 44 Second, “computer-generated image” is a restrictive term, as computer generation is a process that requires direct human input, control and creativity, as opposed to AI, which is a technology that can create without human control. 45
The challenges of interpreting the law in relation to CSEAM are compounded by those pertaining to the investigation and prosecution of CSEAM cases, which require significant resources, including specialized training for law enforcement officers and advanced technological tools, 46 as well as specialized police units. 47 India currently lacks adequate resources and capacity for the same. 48 Moreover, CSEAM, like other cybercrimes, has a cross-border nature, which complicates jurisdictional authority. Therefore, there is a need for India to be part of international treaties addressing jurisdictional authority, such as that pursued by the Council of Europe, 41 as well as to create legislative imperatives for tackling AI-driven CSEAM, such as those created by the European Union, 49 Singapore, China, and the United Kingdom. 50
Implications for Child and Adolescent Mental Health Interventions
CSAM, whether AI-driven or not, produces grave mental health impacts on children. Any child depicted in CSAM is a victim of sexual abuse. 46 Victims experience further and ongoing victimization, with potentially worsening mental health consequences that may extend into their adulthood. 46 These include constant feelings of guilt and shame, 51 humiliation, 52 fear of being recognized in public, 51 fear of others’ perception that they had participated willingly, and mental health conditions such as anxiety, depression, paranoia, sleeping problems, hypervigilance, suicidal ideation or attempts, other self-harm, low body image and sexual and relationship difficulties. 53 CSAM is also used to further groom and harass children; consequently, their exposure to CSAM has several harmful effects, such as social maladjustment, psychological problems, violence, normalization of sexual pathology and changes to sexual behavior, along with a rise in child-on-child sexual assault. 54 Such concerns have implications for child mental health practice, ranging from routine screening for CSA, including OCSA, to curative interventions that lie at the intersection of sexual abuse and behavioral addictions; there is also a need for personal safety and prevention-focused programs to move beyond in-person CSA to addressing online protection and safety.
AI-driven CSAM and OCSA create new imperatives for balancing the view that adolescent sexuality is not inherently dangerous or pathological 55 against the need for adolescent protection and safety, including mental health and well-being. This calls for the exploration of adolescent sexuality and risk behaviors on online platforms, particularly given the pervasive presence of adolescents in online spaces and their positive perceptions of exploring sex and sexuality in these spaces.
It is useful, in the current discussion, to highlight yet again the need for life-skills-based sexuality education programs. Successful programs for the prevention of OCSA include both structural and skills components that need to be adopted. 56 These skills lie in the domain of life skills, with the WHO identifying problem-solving, assertiveness, resistance to peer pressure, empathy, perspective-taking, self-regulation, emotion management, impulse control and conflict resolution, among others, as important components of intervention programs. 56 Mental health professionals working with children and adolescents and implementing life skills programs should be open and nonjudgmental in facilitating decision-making on sexuality and relationship issues, ensure equal participation, and use creative methods such as stories, narratives, film and other art forms, in a quest to enable children and adolescents to come to their own conclusions on the best course of action. 57 The idea of child engagement, recognizing that children understand and know what engages them, has been shown to be beneficial in all programs aimed at preventing OCSA. 58 Additionally, it is important to engage children and adolescents in using life skills in contexts that include the various facets of AI-driven CSA.
Finally, to avoid concluding on a note of despair: while AI poses a great number of challenges to the prevention of OCSA, it has also recently shown promise in its ability to predict and stop OCSA. 11 AI-driven technology has been utilized to monitor online activity, analyze datasets, provide early interventions and offer educational resources in the context of OCSA. 11 However, limited generalizability and concerns about privacy and data safety have restricted the applicability of these technologies to date. 11 Mental health experts have been shown to benefit from using machine learning, 59 for example, in determining which patients will benefit from a certain therapy 60 and the course of treatment in patients with depression. 61 The neurobiological, social and pathophysiological dynamics of mental disorders can be amalgamated using machine learning’s ability to collate large amounts of information and draw new frameworks for current situations. 62 More research, however, is required to understand the generalizability and effectiveness of such technologies.
