Abstract
This article introduces the concept of layered affordances into affordance theory, providing a framework for analyzing how digital platform affordances intersect and reinforce each other in enabling technology-facilitated gender-based violence (TFGBV). Focusing on the #MeTooIndia movement and the Nth Room case in South Korea, we explore how multiplatform affordances, such as visibility, anonymity, and shareability, combine to create digital environments that enable the spread, reinforcement, and normalization of misogynistic narratives. In both cases, perpetrators strategically exploited layered affordances across platforms such as X, Instagram, and Telegram to evade moderation, amplify harmful behaviors, and establish echo chambers of digital harm. This article argues that layered affordances reveal complex cross-platform dynamics that are crucial for understanding TFGBV and the ways in which digital harms are structured and sustained. Our findings highlight the need for nuanced, cross-platform governance policies that address the compounded nature of digital violence, particularly in non-Western contexts where marginalized communities are often most affected. By conceptualizing layered affordances, this study provides a framework for analyzing the affordances that enable digital harms and for developing targeted interventions, paving the way for more effective digital policy and platform design strategies.
In 2018, the #MeTooIndia movement gained momentum across digital platforms like X (previously Twitter) and Instagram (Mathur, 2018), where Indian women shared testimonies of gender violence, creating a shared space of resilience and solidarity. However, the movement’s rise also revealed the capacity of digital spaces to facilitate hostile dynamics across multiple platforms, enabling right-wing, heteropatriarchal forces to appropriate the #MeTooIndia hashtag for reactionary purposes. By flooding multiple social media platforms with anti-feminist and misogynistic rhetoric, these groups systematically co-opted the hashtag, undermining the movement’s intent and silencing survivors’ narratives.
Meanwhile, in South Korea, between 2018 and 2020 (and still ongoing: e.g., C. Kim, 2025), chat rooms on Telegram, a messaging app launched by Russian-born brothers Nikolai and Pavel Durov and now headquartered in Dubai, became hubs for distributing sexually violent content, including sexual abuse videos of women and children. The perpetrators extended their sinister reach through not only Telegram but also X, Facebook, Discord, and cloud services like Google Drive and Mega, exploiting their functionalities for trafficking, extortion, and the distribution of abusive content. This mass digital sex trafficking case extended beyond the digital sphere, escalating to physical sexual assaults, with at least 103 known victimized individuals, including 26 minors, suffering relentless exploitation and abuse (Jang, 2022; Simons, 2022). The exploiters identified victimized individuals through X hashtags initially used by children, women, and LGBTQIA+ individuals to find their community online.
The rise of digital platforms has redefined the ways in which individuals communicate, form communities, and mobilize around social issues (boyd & Ellison, 2007; Papacharissi, 2015). At the same time, these platforms have opened new avenues for harassment and violence, particularly against marginalized groups (Jane, 2016; Lewis et al., 2017). Research has shown that online spaces provide those in power with communities in which to share and promote hate, misogyny, xenophobia, and regionalism (J. Kim, 2018b; Um, 2016; Uzun & Tiryaki, 2024; Yun, 2013). In these spaces, socially constructed collective discourses that resonate with historical and contextual power imbalances are reproduced and regenerated (J. Kim, 2018b).
While significant research has explored platform affordances, much of it has focused on single-platform interactions, often in Western contexts. As we discuss in the literature review, existing studies tend to overlook the complex ways affordances interact across platforms and how these interactions affect marginalized communities in non-Western contexts. This gap is particularly evident in studies addressing digital violence, where research often treats platform-specific affordances as isolated phenomena, overlooking how affordances from multiple platforms combine to create unique vulnerabilities.
This study aims to bridge these gaps by introducing the concept of layered affordances – the interactivity and interplay of affordances within and across platforms. We use this framework to analyze how affordances such as visibility, anonymity, and shareability, when layered across platforms, create complex dynamics that allow for the amplification of harm. Through a focus on India’s #MeToo movement and South Korea’s Nth Room case, this study examines how layered affordances across social media platforms facilitate violence against gendered minorities. Our research fills a critical gap in platform and gender studies by analyzing the interplay between affordances across multiple platforms, demonstrating how these interactions disproportionately impact marginalized communities in non-Western contexts and calling for more nuanced, context-specific platform governance policies.
The overarching research question of this study is: “How does the concept of layered affordances illuminate the ways in which social media features facilitate and amplify digital gender-based violence in non-Western contexts – particularly in India and South Korea – by enabling the circulation, reinforcement, and normalization of misogynistic narratives across interconnected digital ecosystems?” To answer this question, we first introduce layered affordances as a new concept to expand affordance theory, capturing the interactive potential of platform-specific affordances that emerge through cross-platform interactions. Then, we seek to showcase the concept’s applicability through two case studies, #MeTooIndia and the Nth Room case, to reveal how layered affordances exacerbate digital violence against gendered minorities in non-Western contexts.
The concept of layered affordances is highly applicable across contexts when studying digital platforms, capturing how their affordances interact not only with users and the environment but also with each other. In addition to introducing this novel concept, this study addresses the underexplored intersection of platform affordances and digital violence affecting gendered minorities outside the Western framework. We not only call attention to the need for more inclusive platform governance policies but also underscore the importance of designing digital spaces that protect vulnerable populations across diverse social and cultural environments.
Theoretical Framework
Social Media Affordances
Social media researchers were among the first to adopt and apply the concept of digital affordances, recognizing how these affordances shape interactions and behaviors on digital platforms. Early works by Kietzmann et al. (2011) and Treem and Leonardi (2012) identified foundational affordances like visibility, persistence, and editability, which remain central to understanding how social media enables specific social actions. For instance, the affordance of visibility has been linked to the ease with which information can be accessed and shared across platforms (Halpern & Gibbs, 2013; Leonardi, 2014; Vitak & Kim, 2014), facilitating social connections and information dissemination. Similarly, persistence—the durability of content on platforms—has been explored in contexts like Facebook, where it influences long-term knowledge sharing and social presence (Ellison et al., 2015; Treem & Leonardi, 2012). Researchers have also examined how affordances like editability and association shape self-presentation and network-building behaviors on social media (Bayer et al., 2016; Fox & Moreland, 2015; Treem & Leonardi, 2012). Scholars such as Faraj et al. (2011) and Majchrzak et al. (2013) have highlighted how affordances support collaboration and knowledge sharing in online communities, showing that digital affordances not only impact individual actions but also foster collective outcomes. Recent contributions have extended this focus, analyzing the affordances of newer platforms like Snapchat and YikYak, where ephemeral content creates distinct user experiences compared to more persistent platforms (Bayer et al., 2016). Altogether, these diverse studies underscore the critical role of affordances in mediating human-technology interactions, providing a robust framework for analyzing user behaviors across a range of digital environments.
Despite the dynamic and relational foundations laid by early affordance theory (Gaver, 1991; Gibson, 1979; Norman, 1988), much contemporary research on social media affordances has understandably gravitated toward more stabilized frameworks that treat affordances as fixed features tied to specific platforms. This move has facilitated clearer classification and comparison, but it risks overlooking the fluid and context-dependent nature of affordances emphasized in earlier work. Karahanna et al. (2018), in their work on the needs-affordances-features perspective, exemplify this trend by attempting to generate a comprehensive list of social media affordances based on both prior research and an analysis of various platforms. Their study (particularly Supplemental Table D2) clearly demonstrates how social media affordances have been treated as discrete components linked to platform features. While this single-platform approach has helped consolidate the literature and make sense of complex user-platform interactions from the affordance perspective, it—like others in the same tradition—may inadvertently downplay the more fluid, context-dependent, and power-laden nature of affordances that earlier theorists emphasized.
Building on these important foundations, recent scholarship has sought to reintroduce a more dynamic and relational understanding of affordances. Notably, Willems’ (2021) concept of relational affordances addressed key limitations in existing affordance research, particularly the issues of platform-centrism and digital universalism. Rather than treating platforms as isolated or self-contained entities, Willems foregrounds the interactions between mobile social media, users, and their broader technological environments. This move enriches existing frameworks and points the field toward a more situated and globally aware understanding of digital affordances—an approach we aim to continue and expand through our multiplatform perspective.
Bringing the question of power to the forefront of social media affordance research, Schwartz and Neff’s (2019) concept of gendered affordances marks a significant step toward recognizing how affordances are not experienced uniformly but rather propose different possibilities for action depending on users’ social positioning—particularly their gender. Their work highlights how affordances are shaped by, and in turn reinforce, prevailing gendered cultural frameworks, thereby contributing to the reproduction of existing social inequalities. This concept has been applied to platforms like Telegram, where one-to-many broadcasting facilitates male homosocial bonds around the sharing of non-consensual intimate images (Díaz-Fernández & García-Mingo, 2022; Schwartz & Neff, 2019; Semenzin & Bainotti, 2020). Such affordances perpetuate environments where hegemonic masculinity and female objectification are normalized, while digital platforms also allow marginalized groups, such as women and sexual/gender minority individuals, to use specific features to protect against perceived risks in online interactions (Díaz-Fernández & García-Mingo, 2022; Semenzin & Bainotti, 2020). Thus, gendered affordances illustrate how digital media both perpetuate and mitigate cultural gender inequalities through platform-specific interactions.
Taken together, recent scholarship on social media affordances demonstrates the value of moving beyond fixed, one-size-fits-all conceptions of affordances toward a more situated, relational, and power-sensitive perspective. The field is at an important juncture where scholars are increasingly attuned to how affordances operate differently across user groups, technological environments, and sociocultural contexts. This study contributes to this emerging shift by foregrounding the multiplatform dynamics of affordances and their entanglement with structural power, offering a framework that further advances this more nuanced and critical direction in affordance scholarship.
Introducing a Much-Needed Concept: Layered Affordances
Although scholars have long acknowledged the interconnected nature of social media affordances, the field still lacks a comprehensive theoretical framework that captures how these affordances interact across platforms in practice. Research on digital activism, for instance, highlights how activists leverage different affordances across platforms to amplify their messages, mobilize supporters, and coordinate actions, as seen in movements like #BlackLivesMatter, where X’s real-time updates, Instagram’s visual storytelling, and Facebook’s organizational tools work in tandem to enhance collective action (Freelon et al., 2016; Jackson et al., 2020). Similarly, studies on privacy and self-presentation reveal that users navigate context collapse and manage diverse audiences by curating their personas across platforms, using affordances like Instagram’s private stories, Snapchat’s ephemeral content, and Facebook’s controlled visibility options to tailor their disclosures for different groups (Marwick & boyd, 2014; Vitak, 2012). Gendered affordances also shape users’ experiences, as women and sexual/gender minority individuals often adjust their self-expression across platforms to balance authenticity with safety, reinforcing or subverting traditional gender norms depending on each platform’s unique features (Duffy & Hund, 2019; Tiidenberg & Cruz, 2015). Research on anonymity and content moderation further reveals that harmful practices like harassment are not confined to individual platforms but are enabled by affordances that span across them (Phillips, 2015; Semenzin & Bainotti, 2020). Taken together, this growing body of work signals an important shift in affordance scholarship: from viewing affordances as isolated and platform-bound to recognizing the interdependence of affordances across digital environments.
Building on recent efforts to recognize the dynamic nature of affordances and bring in the question of power to affordance theory (e.g., Schwartz & Neff, 2019; Willems, 2021), we introduce a new concept that we call layered affordances to capture the dynamic interplay of affordances across diverse platforms and contexts. Layered affordances are affordances that emerge through the interaction of social media affordances either within a single platform or across multiple platforms. These affordances are activated, amplified, or transformed when they intersect with other affordances, creating a layered effect that shapes user experience in complex ways. Rather than existing in isolation, layered affordances reflect the compounded effects of interacting features, emphasizing how different technological platforms and their affordances collectively influence and modify each other.
In some cases, these affordances may not even exist independently on one platform but only materialize when multiple platforms are used together. These interactions between platforms are what we refer to as multiplatform layered affordances, where affordances span across multiple platforms. Layered affordances capture the complexity of both intraplatform and multiplatform interactions, recognizing how users navigate through apps, social media, and devices that possess distinct yet interconnected affordances. This framework not only addresses how affordances operate within a platform but also how they combine and influence each other across platforms, filling the gap left by existing affordance theories that tend to analyze affordances in isolation.
Technology-Facilitated Gender-Based Violence
In this article, we demonstrate the usefulness of layered affordances as a concept in studying multifaceted technology-facilitated gender-based violence (TFGBV) through two case studies. In both the Indian and South Korean cases, perpetrators strategically exploit these layered affordances, both within single platforms and across multiple platforms, to construct networked environments where misogyny, sextortion, and trolling can thrive. By using affordances across different platforms in concert, they create complex ecosystems that multiply the impact of these affordances in ways that would not be possible within a single platform alone.
TFGBV encompasses a wide range of harmful behaviors on digital platforms, which can occur simultaneously or as an extension of offline violence (Posetti et al., 2021). Common manifestations of TFGBV include cyberbullying, defamation, sexual harassment, image-based abuse, cyberstalking, doxxing, impersonation, hate speech, and gender-trolling (Backe et al., 2018; Dunn, 2020; Henry et al., 2020; Hinson et al., 2018). These tactics are enabled by the affordances of digital technologies, which allow perpetrators to exploit anonymity and scale harm quickly. Digital platforms enable behaviors like monitoring, recording, and disseminating intimate information, often without the victim’s consent (Dunn, 2020). Moreover, the scale and speed at which digital media can spread harm compounds the impact on victims, particularly when platforms fail to implement adequate content moderation or accountability measures (Barter & Koulu, 2021).
Most importantly, TFGBV is not merely a reflection of individual bad actors but also a product of the broader sociotechnical systems that fail to protect vulnerable groups. In regions where gender-based violence is normalized, digital platforms provide new spaces for such violence to thrive, with little recourse for victims (Sheikh & Rogers, 2024; UN Women, 2024). The integration of digital platforms into everyday life has thus created new avenues for harm that traditional legal and social frameworks are ill-equipped to address.
Exemplifying the sociotechnical nature of TFGBV, women and marginalized communities are disproportionately affected by online violence (Afrouz, 2023; Duggan, 2017; Lenhart et al., 2016; Plan International, 2020). Research highlights that TFGBV is deeply connected to ingrained gender norms and systemic inequalities, with significant overlaps involving race, caste, religion, and gender (Barter & Koulu, 2021; Dunn, 2020; Henry et al., 2020). This intersectional nature of TFGBV underscores the importance of adopting nuanced approaches to understanding and addressing online violence.
Layered affordances offer a novel theoretical lens that extends beyond traditional digital harm frameworks, revealing how digital platforms not only enable but actively structure the conditions for gender-based violence. In particular, this layered approach shifts our focus from isolated digital harms to a more interconnected understanding of platform infrastructures, where affordances accumulate, overlap, and reinforce each other in ways that uniquely enable and amplify gendered violence. Our framework demonstrates that platform interactions, rather than being incidental, are central to the persistence and escalation of TFGBV.
Research has demonstrated a direct correlation between particular platform affordances and an increase in online violence. The absence of effective reporting mechanisms and the vague nature of community guidelines frequently leave women vulnerable to targeted harassment (Bhatia et al., 2022; Rajani, 2022). Moreover, the anonymity provided by these platforms and the capability to form online communities can foster echo chambers where misogynistic attitudes become normalized. Such conditions embolden perpetrators to engage in harassment without fear of accountability. As a result, individuals often exploit these design elements to perpetuate their harmful actions.
Reidl et al. (2024) further develop the notion of infrastructural platform violence, which highlights how violent acts are sustained by the design choices made by digital platforms. A case in point is Facebook’s Memories feature, which can unintentionally retraumatize survivors of sexual violence by showing them images of their abusers, “even without active use by the abuser” (Little, 2023, p. 17). This exemplifies how platform violence is not merely an issue of individual offenders but rather a structural problem within the broader context of platform infrastructure.
As Hemphill (2018) notes, it is vital to recognize that the affordances of platforms enable harmful behaviors, and these behaviors can migrate across platforms or manifest simultaneously in various online spaces. In addition, negative actions can be contagious (Cheng et al., 2017; Phillips & Milner, 2018), allowing them to proliferate rapidly. Importantly, the features of these platforms have the capacity to both silence and empower different groups concurrently. The systematic devaluation of marginalized voices and their experiences occurs in both online and offline contexts. To effectively address harassment, it is imperative to focus on the underlying structures and values that allow these issues to persist.
In this context, the concept of layered affordances provides a nuanced approach to studying platform violence on social media, which we demonstrate through two case studies. More specifically, we show how the concept captures the multifaceted nature of digital interactions, the role of platform design in shaping user behavior, and the impact of systemic issues on marginalized communities, especially gender minorities. Thus, we pose the following research question:
How does the concept of layered affordances help us understand how social media affordances facilitate and amplify digital gender-based violence in non-Western contexts, particularly in India and South Korea, by enabling the spread, reinforcement, and normalization of misogynistic narratives across digital ecosystems?
Method
This study examines the concept of layered affordances and the role of networked ecology in facilitating digital violence by drawing on two national case studies: South Korea and India. These cases highlight how technological affordances, with their layered nature, contribute to the spread of online harm and marginalization, exploiting gaps in platform governance and policies that prioritize Western language, culture, and harms.
Asia as a Method
We draw from two instances of affordance activation in separate Asian contexts, India and South Korea. Each case examines a distinct information environment and platform, shaped by its own network of perpetrators who strategically exploited platform affordances in interconnected ways to inflict harm. The cases provide knowledge tailored to specific Asian contexts, offering researchers logical rather than statistical generalizations. A descriptive case study is especially valuable when the research goal is to understand the dynamics of complicated phenomena (Yin, 2014). This is particularly relevant in the fast-evolving digital media landscape, where diverse users contribute with distinct, culture-specific backgrounds, historical perspectives, and intentions.
Using “Asia as method” 1 for our study on digital violence against women in India and South Korea allows us to frame our research by critically centering local histories, cultures, and experiences rather than relying solely on Western theoretical frameworks. This methodology encourages us to view digital violence within these countries not just as a reflection of global or Western patterns but as phenomena shaped by specific social, political, and cultural contexts in Asia. Please refer to Supplementary File A for details about why we chose India and South Korea as our two cases for this study and to Supplementary File B for the historical backgrounds of the cases.
Data Collection and Collaborative Affordance Mapping
Data Collection for the Indian Case
The dataset included 8,000 tweets collected via the X API and 1,000 Instagram posts collected through manual screenshotting, with key hashtags of the movement used as filters. These included the central hashtag #MeTooIndia as well as hashtags that infiltrated the movement online, such as #MenToo and #FeminismIsCancer. These hashtags were appropriated by far-right groups to disrupt feminist discourse, exclude feminist voices from public platforms, and engage in digital misogyny against Indian women. The tweets and Instagram posts were collected between 2018 and 2020, when the movement was at its peak in India, and the dataset spanned English and two Indian languages, Hindi and Tamil. The intention behind collecting data in different languages was to capture the diverse range of narratives around #MeTooIndia and to undertake a qualitative analysis of how discourse in different languages on social media can create specific affordances for the dissemination of misogyny. To protect the privacy of individuals in our dataset, all non-public Twitter handles and usernames were anonymized following ethical guidelines from the Association of Internet Researchers (2019). This included altering usernames, blurring profile details, and slightly modifying tweet language to prevent direct traceability. Only names of public figures and organizations were retained as originally posted.
Data Collection for the South Korean Case
Unlike the Indian case, which relied on social media posts as primary data to analyze the real-time discourse of participants in the #MeTooIndia movement, the South Korean case draws primarily from news articles and investigative journalism for both ethical and methodological reasons. The Nth Room case involved the circulation of sexually exploitative content, including images and videos of minors, which raised serious ethical and legal concerns about collecting and analyzing primary materials shared by perpetrators on encrypted platforms such as Telegram. Access to these platforms is tightly restricted, and engaging directly with such material would risk retraumatizing survivors and potentially reproducing harm.
Instead, news articles and investigative journalism—particularly from credible outlets like Hankyoreh and the work of Team Flame—offered a comprehensive, ethically vetted, and survivor-centered account of how perpetrators operated across platforms. These journalistic sources provided in-depth documentation of the digital infrastructure, tactics, and layered affordances exploited by perpetrators, alongside expert commentary and survivor testimonies that would otherwise be inaccessible to academic researchers. By using curated and context-rich media coverage, we were able to reconstruct the layered affordance dynamics with rigor and sensitivity, ensuring that our analysis upheld both methodological robustness and ethical responsibility.
Hence, the dataset for the South Korean case comprised 29,066 news articles about the Nth Room case, sourced from Bigkinds, a news archive managed by the Korea Press Foundation, which aggregates content from around 54 national and local South Korean media outlets (Bigkinds, n.d.). Articles were retrieved using a search string that combined case-specific terms, such as “Nth Room” and “Baksa Room,” with the names and aliases of the identified perpetrators. A researcher originally collected this dataset for a separate project, closely tracking media coverage since the case’s initial report in August 2019. In addition, the investigative X account of Team Flame was monitored to gain behind-the-scenes insights, and the Hankyoreh digital archive, “Beyond N,” was used to provide further context, offering documentation and analysis of digital sexual crimes in South Korea to support victims and advocate for systemic change. Launched by a major South Korean news outlet, Hankyoreh21, the “Beyond N” project provides a comprehensive overview of these cases, including detailed accounts of victims’ experiences and the justice system’s response. The archive includes sections on patterns of criminal organization, analysis of court rulings, the history of solidarity movements, and educational resources. It aims to preserve records of digital sexual violence and tracks progress in transforming a culture that once tolerated such abuse into one of solidarity and support for victims. Through articles, analyses, and historical documentation, the archive encourages ongoing awareness, support, and systemic change to combat digital violence against marginalized individuals.
Collaborative Affordance Mapping
To identify overlapping affordances between the Indian #MeTooIndia and South Korean Nth Room cases, we employed a qualitative, researcher-driven approach developed for this study, which we call Collaborative Affordance Mapping. This approach draws from established principles of thematic analysis (Saldaña, 2016) and comparative case study research (Stake, 2006; Yin, 2014), adapted to analyze the sociotechnical affordances of digital platforms in the context of gender-based violence. Over the course of a year, the two researchers engaged in iterative coding and reflexive dialogue, examining how core affordances—such as visibility, anonymity, and shareability—functioned within and across platforms (e.g., X, Instagram, Telegram) in each national context, and how perpetrators used these affordances to exploit platform features and evade detection. Through reflective conversation and collaborative analysis, the researchers identified patterns, contradictions, and contextual nuances, documenting points of convergence where affordances facilitated similar abuses across both cases.
Results
In this section, we present the findings of our Collaborative Affordance Mapping of the Indian #MeTooIndia and South Korean Nth Room cases. Using an iterative, comparative thematic analysis of our two cases to map affordances, we identified recurring patterns in how perpetrators exploited platform affordances to enact and sustain gender-based violence. Rather than examining individual platforms in isolation, we foreground how affordances interact across platforms in layered and dynamic ways—producing complex environments of harm. Our analysis revealed five key themes: (1) hashtag hijacking through searchability and visibility; (2) digital misogyny embedded in humor and meme culture; (3) selective visibility and anonymity; (4) the construction of echo chambers across platforms; and (5) the compounding effects of historical and social power imbalances within these digital infrastructures. Each theme is examined through a comparative lens to highlight convergences and divergences in how platform affordances were strategically leveraged in both Asian contexts.
Hashtags Afford Hijacking Through Searchability and Visibility
In both cases, hashtags were a powerful tool for organizing and amplifying conversations on social media platforms, but their visibility and searchability also made them vulnerable to hijacking. Perpetrators exploited these affordances to shift narratives, manipulate discourse, and, in some cases, coordinate harmful activities. The ability to hijack a hashtag stemmed from its dual role as both a tool for organizing collective discourse and as a public, searchable entity that could be co-opted by any user or group with an agenda, regardless of the hashtag’s original intent.
In India, far-right groups hijacked hashtags like #MeTooIndia to undermine feminist efforts. By attaching anti-feminist rhetoric such as #MenToo and #FeminismIsCancer to the original movement, these groups reframed a discourse meant to highlight sexual violence into one that falsely portrayed men as victims of fabricated accusations. The following tweets in English and Hindi (Figure 1) illustrate the deliberate de-centering of women’s stories and experiences of sexual abuse and the re-centering of male “oppression” (Boyle & Rathnayake, 2019) that occur through engagement with hashtags such as #FeminismIsCancer and #MenToo within the #MeTooIndia discourse.

Figure 1. Tweets from India’s men’s rights movement that co-opt #MeTooIndia.
This deliberate hijacking of a feminist hashtag demonstrated how searchability and visibility—two key affordances of hashtags—were weaponized to shift public focus and dilute the power of activist movements. What began as a platform for survivors to share their stories became a battleground where the narrative was forcefully reshaped by anti-feminist groups, contributing to the silencing and invalidation of women’s voices.
Similarly, in South Korea’s Nth Room case, perpetrators used hashtags on platforms like X to identify and target victimized individuals. Two hashtags used by the perpetrators were “섹트” (sext), a combination of “sex” and “Twitter,” and “일탈” (il-tal), meaning “deviation.” These hashtags had been used by X users to express their sexuality and engage in sexual practices on the platform, often anonymously, through sexually explicit images or posts of their bodies (M. H. Kim, 2022). While users found community and formed a subculture of empowerment through sexual expression with these hashtags, perpetrators also exploited the same hashtags to identify and contact those users. News coverage reported that the hashtag “sext” frequently co-occurred with “고딩” (go-ding), a slang term for high schooler (Bigta News, 2020), which was also how perpetrators identified vulnerable minors. Hashtags made it easier for malicious actors to search for and aggregate posts that contained personal or compromising information. In this case, the hashtag’s affordances of searchability and public visibility were used to facilitate exploitation across platforms, turning an organizing tool into an entry point for abuse. Once identified, these individuals were lured to other platforms, such as Telegram and Discord, where the anonymity provided by encrypted chat rooms allowed perpetrators to continue their exploitation with little fear of repercussions.
In both cases, historical imbalances among users, rooted in long-standing societal injustices, played a critical role in how these hashtags were hijacked. In India, the far-right’s appropriation of feminist hashtags reflected deeper patriarchal structures embedded in Indian society, where the voices of women—particularly those from marginalized groups—have historically been suppressed and excluded (Nanditha, 2022). The unequal power dynamics, intensified by caste, class, and gender hierarchies, enabled dominant groups to hijack the hashtag, making it easier for their narrative to overpower that of the feminist movement. Similarly, in South Korea, the Nth Room case highlighted the ongoing gender-based violence that thrives within a society deeply marked by patriarchal norms, where women and gender minorities are disproportionately harassed and abused (J. Kim, 2018a, 2022; Jung, 2023). While social media platforms like X have provided women and gender minorities a space for expression and community, the same affordances also enable perpetrators to identify and target these communities more easily. The hijacking of hashtags in these contexts is not merely a matter of digital manipulation but reflects entrenched power imbalances that are deeply embedded in the social fabric (see Rezwana & Pain, 2023, for more insights into the layered nature of gender-based violence).
Humor and Meme Culture: Digital Misogyny Across Platforms
Humor and meme culture are particularly potent tools for spreading digital misogyny (Borgeson & Valeri, 2004; Greene, 2019; Nasreen, 2021; Schmid et al., 2024), as they allow harmful ideologies to be disguised as lighthearted content. Through their use of humor, perpetrators trivialize serious issues like sexual violence and feminist movements, framing their messages in a way that invites engagement while downplaying the harm being done. These cultural phenomena rely on platform affordances such as shareability, visibility, and anonymity, which make it easier to disseminate toxic humor across multiple platforms. The concept of layered affordances, where affordances from different platforms interact, plays a critical role in this process by enabling the seamless circulation of memes and jokes, deepening existing biases, and creating tightly knit communities where anti-feminist ideologies thrive.
In both the Indian and South Korean cases, humor and meme culture became central mechanisms for perpetuating misogyny and creating insular communities that reinforced harmful narratives. In the Indian case, memes and jokes mocking feminists were widely circulated on platforms such as X, Instagram, and WhatsApp. These jokes, often in regional languages like Hindi and Tamil, typically trivialized the experiences of women involved in the #MeTooIndia movement, depicting them as liars, opportunists, or participants in a “feminist conspiracy.” Memes framed as jokes allowed perpetrators to dismiss the feminist movement as exaggerated or fraudulent (Aiston, 2023; Hodapp, 2017; Krendel, 2020), co-opting humor to undermine its legitimacy. For instance, see Figure 2:

Figure 2. Instagram meme—Humor around #MeTooIndia.
Multiplatform layered affordances facilitated the rapid spread of these jokes. What began on public-facing platforms like X or Instagram was quickly shared on private messaging apps like WhatsApp, allowing the content to move fluidly across different digital spaces. This cross-platform circulation allowed the jokes to reach wider audiences, making misogynistic narratives more widespread and harder to counter. The use of regional languages intensified the problem, as the content often bypassed platform moderators, thriving in isolated, self-reinforcing communities that became echo chambers for anti-feminist rhetoric.
In South Korea, humor and meme culture also played a significant role in amplifying misogyny, particularly during the Nth Room case. Perpetrators operating in private, encrypted Telegram chat rooms used the platform’s custom sticker feature to objectify and dehumanize their victims (The Telegram Team, 2024), creating stickers and memes that turned the suffering of victims into entertainment (Park, 2020). These stickers, featuring the faces and naked bodies of victimized individuals, were created and shared among the perpetrators, fostering a shared sense of humor around misogyny and abuse. The shareability and visibility of these DIY stickers sustained humor and memes within the perpetrators’ community, while the same affordances subjected victimized individuals to shame, violence, and abuse. Anonymity was granted only to the perpetrators—victimized individuals had none. This form of digital misogyny allowed perpetrators to distance themselves from the gravity of their actions, framing abuse as “just a joke” or a form of dark humor.
These “memes” circulated across platforms—from the private anonymity of Telegram to the more public visibility of social media sites—to gather more perpetrators into the “community,” promoting sexual abuse as something fun and humorous. Several news articles we examined reported how the pain of the victimized individuals was regarded as “entertainment” by the perpetrators—something fun, amusing, and a source of jokes (e.g., Kim & Yoon, 2020; Lee, 2020). This trivialization of abuse normalized harmful behavior and made it easier for perpetrators to continue exploiting their victims with little to no accountability. The desensitization also facilitated the recruitment of new participants into these toxic communities, where humor acted as a gateway to more explicit and dehumanizing content.
Thus, the intersection of humor, layered affordances, and historical power imbalances highlights the complex dynamics that allowed misogyny to thrive in digital spaces. Humor and meme culture, while often dismissed as harmless, played a central role in facilitating digital misogyny in these two cases across platforms.
Selective Visibility and Anonymity
As briefly mentioned above, a critical dimension of multiplatform layered affordances is how visibility and anonymity are selectively granted, depending on the user’s role and position within the digital ecosystem of violence. As with humor and memes, these affordances were not evenly distributed; instead, they reflected and reinforced existing power imbalances, shaping how individuals engaged with digital platforms. Perpetrators were often able to exploit these affordances in ways that protected them, while their victims remained vulnerable, lacking the same advantages of anonymity or visibility to seek help or protection.
In the Indian case, the #MeTooIndia movement revealed how selective visibility operates within digital spaces. In the dataset collected from X and Instagram, around 54% of tweets and posts in regional languages were misogynistic in nature (Figure 3). All of these tweets carried the hashtag #MeTooIndia along with others like #MenTooIndia, which increased the shareability and visibility of their discourse. At the same time, tweets in regional languages like Hindi and Tamil, which carried anti-feminist and misogynistic discourse, were often less visible to global audiences and platform moderators than English-language tweets. This selective visibility and anonymity allowed harmful actors to operate within a parallel digital sphere, relatively hidden from the scrutiny of both the broader public and platform moderation systems. The regional-language tweets, which included trolling, harassment, and mockery of women who participated in the #MeTooIndia movement, thrived in an environment that was semi-private due to linguistic barriers. The perpetrators could engage in harmful behaviors within these regional digital enclaves, facing little oversight while still participating in global conversations. This dynamic created an insulated space where misogynistic rhetoric could flourish without the consequences that would likely arise in more visible, English-language discourse.

Figure 3. Fifty-four percent of tweets in regional languages around #MeTooIndia were misogynistic.
Selective anonymity played a similarly vital role in the Nth Room case. Perpetrators used platforms like Telegram, which offer strong encryption and privacy features, to create chatrooms where they shared explicit content with minimal risk of exposure. This form of selective anonymity gave perpetrators a high degree of freedom to continue exploiting their victims without fear of legal repercussions or public condemnation. Investigative reports on these Telegram chatrooms described how confident perpetrators were that they could evade law enforcement (e.g., Kookmin Ilbo Special Investigative Team, 2020). Telegram’s encryption, designed to offer users privacy and security, became a shield for perpetrators to hide behind, enabling the continued circulation of abusive content with minimal risk. At the same time, the anonymity of the perpetrators left the victimized individuals unable to learn their identities, leaving them vulnerable to believing whatever the perpetrators told them without being able to verify whether those claims were true (CBS NoCut News, 2020).
Furthermore, the circulation of memes in regional languages such as Hindi, Tamil, and Korean across platforms—particularly from X and Instagram to channels like WhatsApp and Telegram—reveals a significant affordance dynamic. Here, language itself becomes a strategic affordance, shaping visibility and moderation in layered and uneven ways. Regional-language memes often slipped through the cracks of platform governance, as most content moderation infrastructures remain disproportionately focused on English-language discourse. When these abusive images and videos circulated across multiple platforms, victimized individuals found themselves trapped in echo chambers of digital harm, with little to no platform support in non-Western languages (see Supplementary File C for an example). Compounding this issue, the combination of language-based opacity, shareability, and cross-platform mobility enabled misogynistic content to travel fluidly between public and private spheres—largely unchecked and unmoderated.
In both cases, the selective nature of these affordances significantly amplified the power imbalance between perpetrators and victims, and language barriers to moderation made this worse. Perpetrators strategically used the selective visibility of their actions, often operating within digital spaces that offered them relative invisibility from larger public scrutiny. In regional-language spaces or encrypted platforms, they were able to engage in harmful actions without significant oversight, ensuring that their activities were kept hidden from those who might intervene. Victims, on the other hand, were denied the protective anonymity or selective visibility that might have offered them some degree of safety. Instead, they were often thrust into highly visible spaces where they became easy targets for public shaming, harassment, and further exploitation.
Echo Chambers of Digital Harm in Multiplatform Space
The combination of hashtags, humor, memes, selective visibility, and anonymity created echo chambers of digital harm in multiplatform spaces—closed digital spaces where harmful ideologies are reinforced, circulated, and amplified among like-minded individuals. These echo chambers thrived on multiplatform layered affordances, where users move seamlessly between platforms, exploiting different features to hide their activities, expand their reach, and evade moderation.
The creation of echo chambers of digital harm in both cases resulted directly from the interaction between multiplatform layered affordances, which itself becomes an emergent affordance—one that can only be understood through the very concept of multiplatform layering. The combination of affordances, when spread across platforms, enables the growth of these toxic communities. As users move between platforms, they are able to exploit the affordances of each to their advantage: using hashtags to gain visibility, memes and humor to reinforce shared ideologies, encryption to protect their activities from external oversight, and cloud storage links to bypass content moderation systems. These affordances, when layered together, create a powerful network of digital spaces where harmful narratives can circulate, unchecked and unchallenged, leading to the deep entrenchment of misogyny and violence.
In both the #MeTooIndia and Nth Room cases, the multiplatform nature of layered affordances played a central role in the development and sustainability of these echo chambers. Perpetrators did not operate solely on one platform; instead, they moved fluidly between public-facing platforms and more private or encrypted spaces. In India, for example, far-right groups used X to gain visibility through hashtags, while simultaneously sharing misogynistic memes and jokes on Instagram and WhatsApp, where the content became less visible to outsiders. These private groups on WhatsApp acted as echo chambers, where members could share content that reinforced their anti-feminist beliefs without fear of moderation or public backlash. Similarly, in South Korea, perpetrators used public platforms like X to identify victims through hashtags and then shifted to encrypted spaces on Telegram, where they could continue their abuse in secret. The addition of cloud storage platforms like Google Drive and MegaDrive added another layer to this ecosystem, allowing perpetrators to easily store and share explicit content without risking platform moderation.
The interplay between public visibility, private anonymity, and shareability across platforms enabled these echo chambers to grow unchecked. Public platforms like X provided perpetrators with the visibility they needed to spread their harmful ideologies and identify new targets, while private platforms like Telegram and cloud services offered the anonymity and encryption necessary to hide their activities and protect themselves from external scrutiny. This orchestration of multiplatform affordances created a layered environment where echo chambers of hate could thrive, as participants were able to move between visibility and invisibility, shareability and secrecy, depending on their needs.
Furthermore, these echo chambers were reinforced by the selective moderation practices of platforms, particularly in non-Western contexts. In India, for instance, misogynistic content shared in Hindi and Tamil was less likely to be flagged or removed due to the lack of robust moderation for regional languages. This selective visibility allowed regional-language echo chambers to flourish, as the content they shared remained largely invisible to platform moderators focused on English-language discourse. In South Korea, the encrypted nature of Telegram and the use of cloud storage links made it nearly impossible for law enforcement or platform moderators to intervene in the Nth Room case, allowing the perpetrators to continue operating within their hidden echo chambers for an extended period of time.
The echo chambers of hate in both the #MeTooIndia and Nth Room cases exemplify how multiplatform layered affordances can be exploited to create closed, self-reinforcing digital spaces where harmful ideologies thrive (Figure 4). By combining the affordances of visibility, anonymity, shareability, and cloud storage services across multiple platforms, perpetrators were able to form tightly knit communities where misogyny, abuse, and violence were normalized and amplified. The orchestration of these affordances across platforms highlights the need for a more integrated approach to platform governance, one that considers the complex interplay of affordances in sustaining harmful digital ecosystems.

Figure 4. Venn diagram of layered affordances leading to echo chambers of digital harm.
Conclusion
In this article, we introduce the concept of layered affordances to extend affordance theory, demonstrating how echo chambers of digital harm themselves constitute a multiplatform layered affordance. While this concept has broad applicability across social media and platform research—especially in examining the complex interactions among platforms, users, and sociotechnical environments—we highlight its particular value for studying digital harms. Through our case studies, we illustrate how layered affordances become particularly salient in contexts where perpetrators exploit multiple platforms and historical power imbalances exist between user communities.
Throughout our case studies, we demonstrated how echo chambers of digital harm in multiplatform spaces arise from the interaction of multiple affordances across platforms. Previous literature on digital activism (Freelon et al., 2016; Jackson et al., 2020) has shown that multiplatform affordances allow movements to coordinate and mobilize support across distinct digital environments. Extending these findings, we observe that harmful actors exploit similar cross-platform affordances to reinforce harmful behaviors across networks. The #MeTooIndia and Nth Room cases illustrate that digital abuse and hate are not restricted to the affordances of a single platform; rather, they emerge from the intricate interplay of affordances spanning multiple platforms. By examining affordances across platforms, we show how distinct functionalities combine to perpetuate online abuse—a contribution that builds upon and complicates existing research on platform-specific affordances in online social dynamics.
Each platform contributes unique affordances that, when layered together, create a potent mechanism for spreading and reinforcing harmful behaviors. In the #MeTooIndia case, the layered affordances spanned platforms like X and Instagram. Each platform provided distinct functionalities that perpetrators utilized to build and sustain their misogynistic narratives. On X, the visibility affordance of hashtags like #MeTooIndia allowed far-right groups to hijack the conversation and insert anti-feminist rhetoric through hashtags like #MenToo and #FeminismIsCancer. These findings resonate with research on networked privacy (Marwick & boyd, 2014; Vitak, 2012), where users manage public and private personas across platforms. Here, however, we observe that harmful actors exploit visibility affordances to broadcast their messages widely and recruit supporters through networked public spaces, especially using regional-language content that circulates beyond English-language moderators’ oversight.
As these conversations gained momentum on X, perpetrators shifted the discourse to more private platforms such as Instagram, where they could engage in smaller, insular groups shielded from public scrutiny (Marwick & boyd, 2014; Vitak, 2012). The shareability affordance of Instagram enabled these users to spread memes, jokes, and anti-feminist content in a more protected space, outside the visibility of the public sphere. These more private, insular spaces acted as echo chambers where the ideas fostered on more visible platforms like X were further solidified and reinforced. Thus, as previous literature highlights, platforms offer distinct affordances that, when layered, allow users to fluidly navigate between public and private realms (Freelon et al., 2016). By synthesizing these insights, our framework of layered affordances shows how actors reinforce harmful behaviors by shifting between public visibility and private anonymity, evading broader scrutiny.
Similarly, in the Nth Room case, perpetrators used multiple platforms to create an elaborate network of abuse that spanned public and private digital spaces (Phillips, 2015; Semenzin & Bainotti, 2020). Research on harassment and content moderation (Duffy & Hund, 2019; Semenzin & Bainotti, 2020) suggests that digital affordances can enable harmful behavior, particularly when anonymity and visibility intersect. Building on this work, our study reveals how perpetrators utilized X’s visibility affordance to locate and target victims, while exploiting Telegram’s anonymity and encryption affordances to continue their abuse with minimal risk of exposure. The multiplatform nature of these affordances adds nuance to current discussions, as it demonstrates that harmful interactions are sustained not by a single platform but by the cross-platform dynamics of affordances.
Telegram’s custom sticker feature, which facilitated objectification and dehumanization through visual humor, is another example of layered affordances (Díaz-Fernández & García-Mingo, 2022). Meanwhile, the shareability affordances of cloud storage services like Google Drive and MegaDrive enabled perpetrators to circulate explicit content without detection, using link-sharing mechanisms to bypass direct content moderation (Schwartz & Neff, 2019).
Our findings also suggest that language itself can act as a strategic affordance—when layered with visibility and mobility, regional languages enable harmful content to move fluidly between public and private platforms while evading moderation. This highlights the sociotechnical specificity of layered affordances in contexts shaped by language, geography, and power. This multiplatform ecosystem exemplifies how specific affordances, when layered, foster environments where abuse escalates unchecked by moderators or law enforcement. By situating these findings within previous scholarship on affordances and digital harms, we show how layered affordances amplify harmful behaviors, with interactions that cross public and private spheres and evade conventional oversight mechanisms.
Through these two case studies, which demonstrate the orchestrated use of affordances to create echo chambers of digital harm, we show the usefulness of the concept of layered affordances. Specifically, we propose layered affordances as a new theoretical framework to understand, identify, and create tangible solutions for TFGBV and other gender-based harms across multiple platforms. Layered affordances advance affordance theory by highlighting that capabilities like visibility, anonymity, and persistence are not isolated but interact in complex ways, creating new dynamics of power and control across platforms (Phillips, 2015; Semenzin & Bainotti, 2020). This interconnectedness is crucial in studying digital harms, where malicious actors exploit these interactions to bypass platform restrictions.
Today’s users rarely limit themselves to one platform, instead navigating seamlessly across networks. Layered affordances illustrate how platform-specific features intersect—for example, X’s visibility combined with Telegram’s anonymity—to amplify harmful behaviors such as harassment. By capturing these cross-platform dynamics, layered affordances offer a more accurate framework for understanding online interactions.
Layered affordances show how affordance interactions can worsen existing power imbalances, enabling powerful actors to manipulate affordances to reinforce digital inequalities. This perspective complements prior work on gendered affordances by Schwartz and Neff (2019), which focused on single-platform affordances reinforcing gender norms. Our study demonstrates that the combined affordances across platforms not only perpetuate but intensify these inequalities, especially in cases where marginalized users face compounding vulnerabilities across platforms. By framing these interactions as layered affordances, we underscore the importance of informed platform governance to mitigate these abuses.
From a platform design and policy perspective, layered affordances offer critical insights. The multiplatform nature of layered affordances presents challenges for content moderation. As perpetrators exploit affordances across platforms, each platform’s isolated content moderation strategies struggle to address the scope of abuse. For example, visibility on X may allow misogynistic content to spread, but once the conversation shifts to encrypted platforms like WhatsApp or Telegram, the content becomes difficult to track. This layered orchestration of affordances underscores the need for cross-platform cooperation among moderators and law enforcement to address harmful behaviors across the broader digital ecosystem.
Future research can build on the concept of layered affordances to develop theories that map the intersection of digital platform features and gender-based and other forms of violence. By examining how affordances interact across platforms, scholars can uncover patterns of harm that traditional models overlook. This approach can guide the design of more effective governance frameworks, targeting specific platform affordances that enable TFGBV. In addition, using layered affordances can inform policy solutions that account for the complex dynamics of digital spaces, leading to tangible interventions such as platform design changes, user behavior regulations, and proactive content moderation strategies to reduce gendered harm online.
Supplemental Material
Supplemental material, sj-docx-1-sms-10.1177_20563051251383528, for “Echo Chambers of Digital Harm: Insights Into Layered Affordances From India and South Korea” by Heesoo Jang and Narayanamoorthy Nanditha, Social Media + Society.
Acknowledgements
We would like to thank Dr. Daniel Kreiss for his invaluable feedback on an earlier version of this manuscript. We are also grateful to the editorial team and the anonymous reviewers for their thoughtful comments and suggestions, which greatly strengthened this work. In addition, Heesoo Jang would like to thank Team Flare for their investigative journalism and for the valuable interactions during the early stages of this project.
Funding
The authors received no external financial support for the research, authorship, and/or publication of this article.
Declaration of conflicting interests
The authors declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
