Abstract
Technology-facilitated abuse and violence disproportionately affect marginalized people. While researchers have explored this issue in the context of public-facing social media platforms, less is known about how it plays out on more private messaging apps. This study draws on in-depth interviews with women and queer journalists and activists in Lebanon to illustrate their experiences of infrastructural platform violence on WhatsApp. Specifically, we distinguish between identity-based violence propagated on platforms, and violence propagated by platforms due to infrastructural neglect of vulnerable populations. Our results document how perpetrators employ the affordances of WhatsApp in harmful ways. We highlight the individual emotional and reputational toll of doxxing and harassment campaigns. The study also showcases the societal ramifications of silencing and self-censorship, as well as infrastructural platform failures. Findings underscore the need to shift attention in platform studies toward populations and geographies whose safety has systemically been neglected by technology companies.
Introduction
Digital platforms are not equally safe for all users. People “who do not identify as men and who, in addition, hail from diverse racial and LGBTQ+ communities” (Little, 2023: 4) disproportionately become targets of technology-facilitated, often identity-based, violence online (Dragiewicz et al., 2018; Hinson et al., 2018). This issue also applies to messaging apps (Semenzin and Bainotti, 2020). But while researchers have begun to describe the implications of the spread of false and misleading information on encrypted messaging apps (Gursky et al., 2022), less is known about how specific affordances of messaging apps are used for individualized forms of violence and abuse.
We draw on interviews with women and queer journalists and activists in Lebanon, in order to shift attention toward populations whose safety is systemically neglected by technology companies (Costanza-Chock, 2020; Rigot, 2022). In Rigot’s (2022) terms, these populations constitute the decentered: groups who experience high degrees of marginalization. By interviewing women and queer journalists and activists, we showcase experiences of individuals who are engaged publicly on social media, and who are often targeted with harassment (Chen et al., 2020; Høiby, 2020).
Lebanon has a high WhatsApp penetration rate (Northwestern University in Qatar, 2019), and the platform assumes a pivotal role in the local information ecology. The country is centrally located within a region referred to as either MENA (Middle East and North Africa) or SWANA (Southwest Asia and North Africa; Koss, 2018). Lebanon continues to experience the consequences of cascading disasters and crises, including sectarian conflict, the 2020 Beirut port explosion, and economic upheaval, as well as sociopolitical tensions related to the presence of large refugee populations in the country (El-Masri et al., 2022). While Lebanon registers as comparatively low in importance on the radar of Western technology companies, it has an active digital public sphere (Khalil, 2017), alongside a “relatively free media landscape, liberal culture and significant participation of women in the workforce” (Melki and Mallat, 2016: 57).
In this research, we introduce the concept of infrastructural platform violence as violence propagated on, as well as by platforms. Drawing on platform and infrastructure studies (Plantin et al., 2018), and building on the notions of both platform violence (Little, 2023) and infrastructural violence (Rodgers and O’Neill, 2012), we describe how technology-facilitated, often identity-based, violence is propagated. With the term infrastructural platform violence, we point to two modalities: (1) violence propagated on platforms, wherein platforms serve as a communicative and technological avenue for perpetrators to target their victims, and (2) violence propagated by platforms, wherein a platform’s infrastructure neglects—or does not sufficiently protect—vulnerable user populations.
Literature review
Messaging apps, WhatsApp, and platform governance
Messaging apps constitute important infrastructures for communication—for work, play, political activism, information-seeking, and exchange about political news, among other functions (e.g. Agur and Frisch, 2019; Cruz and Harindranath, 2020). The most prominent platform in this domain—with more than 2 billion users worldwide—is WhatsApp (2020), purchased by Facebook (now Meta) in 2014. As messaging apps have penetrated vast realms of sociality, WhatsApp has been referred to as a “technology of life” (Cruz and Harindranath, 2020).
Messaging platforms like WhatsApp can be distinguished from open social media platforms by a set of markers: They are typically app-based and focus on interpersonal communication, but may also include larger group communication features or even broadcast modes in which one person can communicate to a large group. This liminality—between the conventional publicness of social media platforms and private communications in individual chats—can render users of messaging apps “refracted publics” (Abidin, 2021) or “counterpublics” (Trauthig and Woolley, 2023), with content trickling between public and private spheres through “cascade logic” (Gursky et al., 2022). One key feature of many messaging apps is that they are either partly or fully end-to-end encrypted (E2EE), which means that neither platforms nor law enforcement agencies have the means to access the content of communications between any two parties (Santos and Faure, 2018). Particularly in authoritarian contexts, or for behaviors that prosecutors deem illegal, encrypted messaging platforms are used by activists to stay under the radar (Agur and Frisch, 2019).
Encrypted messaging platforms pose challenges for platform governance, a term which describes the ways in which platforms regulate user speech through community guidelines, and police content through content moderation and the management of content visibility (Riedl et al., 2022; Gillespie, 2022; Roberts, 2019). On WhatsApp and other encrypted services, content moderation is typically done through metadata analysis and user reporting, since other forms of moderation would tinker with end-to-end encryption (Kamara et al., 2021). Importantly, the private nature of the platform, in addition to the ready availability of specific affordances, also makes encrypted apps like WhatsApp stand out as effective tools for harassment (Aizenkot and Kashy-Rosenbaum, 2018; Dagher et al., 2018; Semenzin and Bainotti, 2020).
In 2019, Meta CEO Mark Zuckerberg announced that the company would gradually make communications across all its properties encrypted by default (Welch, 2019). While the project has not been completed, Meta announced the rollout of default end-to-end encryption for personal chats on Messenger and Facebook in December 2023 (Crisan, 2023). Some scholars have interpreted Meta’s move toward more encryption less as a response to what users want, and more as risk avoidance (Santos and Faure, 2018): When communications are encrypted, platforms moderate less, which in turn may shield them from responsibility for controversial and hateful content.
Harassment of minoritized/marginalized communities
Women, queer individuals, and people of color have consistently been identified as the most at-risk targets of online harassment and hate speech (e.g. Dagher et al., 2018; Hendricks et al., 2020). Twitter, Facebook, and Instagram have been particularly highlighted as online spaces where women and queer individuals (Chadha et al., 2020; Sobieraj, 2018), activists (Goyal et al., 2022; Kurasawa et al., 2021; Riedl et al., 2023), and journalists (Chen et al., 2020; Høiby, 2020; Holton et al., 2023; Melki and Mallat, 2016) have been targeted with sexist, racist, and abusive language. Scholarship in feminist media studies highlights structural arguments about gender-based harassment, and authors have articulated conceptual terms such as networked misogyny (Banet-Weiser and Miltner, 2016) or the manosphere (Marwick and Caplan, 2018) to describe endemic gender-based harassment. A rich body of work in journalism and media studies points to just how prevalent an issue harassment is for professionals in the industry, explains the gendered nature of harassment, and showcases how systemic counterefforts are sorely lacking (Chen et al., 2020; Goyal et al., 2022; Høiby, 2020; Holton et al., 2023; Melki and Mallat, 2016). For example, women journalists who experience harassment avoid social media platforms and “self-censor, or stop reporting entirely” (Koirala, 2020: 54). News articles authored by women journalists at The Guardian displayed higher rates of blocked reader comments than those authored by men, because the comments were “abusive and dismissive” (Gardiner, 2018: 603). At the same time, strategies to deal with the emotional burden of harassment (Miller and Lewis, 2022) are carried on the shoulders of the very same people who are experiencing this violence (Mesmer and Jahng, 2021).
Meanwhile, scholarly interest in harassment on messaging apps has only recently begun to emerge. Hendricks et al.’s (2020) survey of queer South African university students highlights harassment occurring on WhatsApp and through SMS, and victims’ different response mechanisms. Semenzin and Bainotti (2020) explain how Telegram’s affordances are conducive to misogynistic cultures that embrace gender-based violence, such as the non-consensual dissemination of intimate images. Saha et al. (2021) illustrate how WhatsApp is an important vector for dehumanizing speech about Muslims in India. Our study contributes to this burgeoning body of work by highlighting how platform-specific WhatsApp affordances are employed toward abuse.
Infrastructural platform violence
Digital platforms like WhatsApp have increasingly come to be understood as infrastructures, with scholars describing this development as an “infrastructuralization of platforms” (Plantin et al., 2018: 306). Researchers contend that big tech companies and their services have become essential utilities. The term infrastructure also describes the ways in which societal power dynamics are reflected in systems, with far-reaching consequences for the lives of the people who rely on them. Latour (1990) has argued that “technology is society made durable” (p. 103), meaning that technology manifests power structures. While infrastructure is mostly invisible and relegated to the background for many users, it “becomes visible when it breaks” (Star, 1999: 382). Infrastructure becomes noticeable when there is a mismatch between what people expect from using it, and what their experience turns out to be (Seberger and Bowker, 2021). Messaging apps such as WhatsApp have been described as both “social” and “corporate-computational” infrastructures; they are “social” because their infrastructures cater to the communicative needs of users—such as easy and accessible texting—and they are “corporate-computational” in that the platform’s ultima ratio is to extract monetary value from users (Pierson, 2021).
If infrastructure reflects how society is governed, it stands to reason that infrastructure is also complicit in how violence is propagated in society (Rodgers and O’Neill, 2012). For instance, cisheteropatriarchy, identity-based hatred, and gender-based violence are all exacerbated by “administrative norms or regularities” (Spade, 2015: 29) inherent to infrastructures. In other words, hegemonic frames of reference and values promoted in society—such as misogyny, homophobia, or white supremacy—are reflected in the ways administrative processes unfurl. Infrastructural violence, then, refers to the ways in which “the workings of infrastructure can be substantially deleterious” (Rodgers and O’Neill, 2012: 403). This can be understood either as active or passive—infrastructure may be created with an explicit intent to harm (active), or infrastructure renders harm through haphazard design choices and disregard for their consequences (passive; Rodgers and O’Neill, 2012). In the realm of digital platforms, data violence (Hoffmann, 2021) describes the ways in which technology advances violence through systems of data classification and labeling. Platform violence (Little, 2023) encapsulates how violence is propagated through platform design choices. Facebook’s Memories function, for example, has been found to retraumatize survivors of sexual violence by showing them photos of perpetrators, “even without active use by the abuser” (Little, 2023: 17). As a larger umbrella term, infrastructural platform violence describes harms produced by algorithmic, programming, or dataset bias; the unintended consequences of haphazard design; and the ways in which platforms respond to regulation or economic, political, and social change, to name but a few.1 This makes apparent the need for platforms to incorporate ethics of care frameworks in how their infrastructures are thought up and rolled out (Paris et al., 2023).
We seek to emphasize the important role that identity—often gender identity, but also sexual orientation—plays in the domain of technology-facilitated abuse. In our study, we adopt the notion of technology-facilitated identity-based violence, which centers “action by one or more people that harms others based on their sexual or gender identity or by enforcing harmful gender norms” (Hinson et al., 2018: 1) mediated through technology. Building on platform studies, infrastructure studies, and work on technology-facilitated identity-based violence, we introduce the concept of infrastructural platform violence. We understand infrastructural platform violence as bifurcated: It is identity-based violence propagated on platforms, and identity-based violence propagated by platforms due to infrastructural neglect of harms, which disproportionately affect vulnerable populations. Our framework posits that infrastructural violence by platforms must primarily be understood as a form of passive violence. While platforms were likely not created with an intent to render violence, they nonetheless create “socially harmful effects [. . .] from infrastructure’s limitations and omissions” (Rodgers and O’Neill, 2012: 407). Framing violence here as an infrastructural issue is helpful as it “situates blame and responsibility” (Rodgers, 2012: 432) not only with individual perpetrators but also with infrastructure. We agree with Dragiewicz et al.’s (2018) observation that the “issue of gender-based violence and harassment has not been a priority within the tech industry” (p. 618), despite the impact it has on how women and queer people use the Internet. Not making it a priority, however, is reflective of how platforms are not neutral infrastructures (Hallinan et al., 2022), since inertia further cements the status quo.
Rigot (2022) defines minoritized user groups as the decentered, people who are “often those that face highest marginalization in society, without legal, social or political support structures [. . .] they are often criminalized and at risk of having technology weaponized against them” (p. 2). In the same vein, Costanza-Chock (2020: 77) explicates how technological design’s focus on a very specific subset of the population—often American, white, able-bodied, and male—creates what they term “a spiral of exclusion,” thus further reproducing—intentionally or not—the oppression of already-oppressed populations. We posit that only by centering marginalized users and their experiences outside a WEIRD (Western, Educated, Industrialized, Rich, and Democratic) point of view can we start unfurling this spiral of exclusion. Against this background, we ask the following research question:
RQ. How do women and queer journalists and activists in Lebanon experience and describe the consequences of infrastructural platform violence on WhatsApp?
Methodology
The case of Lebanon
Lebanon is a small country located in the MENA/SWANA region (Koss, 2018) with an ethnically and religiously diverse population. Its politics are shaped by sectarian clientelism, and its geographic position has made it host and refuge for many displaced people over the course of history. Over the last few years, the country underwent an array of reciprocally compounding crises of economic, pandemic, political, and social nature. An unsustainable debt crisis and the government threatening to levy a so-called WhatsApp tax to charge people for using voice-over-Internet communications culminated in October 2019 in what some have called the WhatsApp revolution (Alagha, 2022; Merhej and Qureshi, 2020). These protests, which launched in Beirut but spread across the whole country (Merhej and Qureshi, 2020), were defined by heavy use of digital technologies among the young people who came out to protest. Their chants included calls to end the political and sectarian system and violence against women (Alagha, 2022), aligning with an “active women’s rights movement” (Melki and Mallat, 2016: 58) in the country.
In 2019, 94% of the Lebanese population reported having access to the Internet, and 92% said that they used WhatsApp; this compares to 78% for Facebook, 68% for YouTube, 56% for Facebook Messenger, 45% for Instagram, 17% for Snapchat, and 10% for Twitter (Northwestern University in Qatar, 2019). In total, 33% of Lebanese say that social media is their primary source of information, and 52% say they “agree”/“strongly agree” that they trust information from social media more than that from newspapers or television (Wee and Li, 2019). The media system in the country is characterized by a private partisan ownership structure (Khalil, 2017). Confidence in public institutions in Lebanon has been on the decline for years leading up to the 2019 revolution (Fakih et al., 2020). Research into Lebanon serves as an important comparative point of reference for countries in the MENA/SWANA region, as well as for information ecologies with a high WhatsApp penetration rate.
Data collection
The study received Institutional Review Board approval at The University of Texas at Austin on 10 June 2022. We conducted semi-structured in-depth interviews with women and queer Lebanese journalists and activists through Zoom between 8 September 2022 and 4 January 2023. Our sample of 14 people included six journalists, five activists, and three digital rights experts. We identified potential interviewees by researching the names of journalists and activists who had spoken publicly about experiences of harassment and violence, individuals who had covered or were involved in the October 2019 protest movement, and through personal contacts of one study author who had previously worked at a Lebanese digital civil rights organization. While our primary outreach focus was women, we were also keen to speak with queer people regardless of their gender identity, to address intersectional forms of marginalization, harassment, and identity-based violence, focusing squarely on the experiences of those who live outside the comfort and power of cisheteropatriarchal experiences (Cheaito, 2022; Clark-Parsons and Lingel, 2020; Meyer and Denise, 2014). Among our sample, nine interviewees identified as women, four as men, and one as non-binary. Interviews ranged between 45 and 90 minutes and were recorded and transcribed using automated transcription software. We then created memos—shortened, condensed, and annotated versions of transcripts. In cases where original interviews were conducted in Arabic, memos were translated into English by a study author. We asked participants to provide pseudonyms to protect their anonymity and safety, though we are honoring the request of one participant who insisted on using their real name.
Analysis
We used coding techniques developed in grounded theory (Corbin and Strauss, 2015) to parse through 123 pages of single-spaced memos in the computer-assisted qualitative data analysis software Atlas.ti, starting with a list of open codes created by one study author. Iteratively expanding this code list through constant comparison, a minimum of two study authors open-coded each transcript. We repeatedly met to explore how and the extent to which codes were interrelated in axial coding.
Results
Our analysis points us to three overarching themes. The first one highlights the ways in which abusers on WhatsApp employed specific platform affordances to propagate harassment and gendered violence. The second theme illustrates individual as well as societal consequences of platform violence. The third and last theme explores how interviewees perceived infrastructural failures of platforms and the corresponding neoliberal push toward individual resilience.
Infrastructural platform violence on WhatsApp
Violence on WhatsApp is an expansive domain, where perpetrators combine conventional tactics like (sexualized) forms of doxxing and threats with more sophisticated techniques such as subverting account safety features of the platform, alongside harnessing platform-specific affordances like WhatsApp groups. Groups are a core feature of WhatsApp, and allow individuals to communicate with trusted friends, family members, or work colleagues. They can become focal vectors for violence and harassment. Sami, a digital security practitioner and digital rights activist, explained how perpetrators target individuals that they want to kick off the platform by adding them to groups: Someone sends you a message telling you that your WhatsApp is going to be suspended, and they give you a countdown. They type the countdown for you, like: “5, 4, 3, 2, 1,” and by the end of the countdown, your WhatsApp actually gets suspended. We eventually found out that those accounts were suspended because those malicious users were adding the user—the victim in this case—to a group, and the profile picture of that group actually contained pornographic images for [note: the interviewee meant “of” instead of “for”] children.
Groups were also used as a vehicle for coordinated attacks on individual users. Yasmine, an activist and former student organizer during the 2019 revolution, explained, They’d add me to a WhatsApp group and then it would be 30 people in the WhatsApp group. And then they would send messages together, like ridiculing, cursing, attacking. [. . .] So, at some point, I was still, you know, taking screenshots maybe and then exiting the groups, but it became faster than I could handle.
Another tactic brought up during interviews was hostile account takeovers. Attackers would attempt to reset accounts by triggering a message with a code to victims. According to Sami, they would then text “Hey, I sent you this code by mistake, can you send it back to me.” Upon sharing the code, “within seconds the WhatsApp number of that person was deregistered, [and] was taken over by the hacker.”
Across interviews, many shared having experienced doxxing, a key area of concern. Georgina, a journalist focused on press freedom and freedom of information, explained how perpetrators sometimes shared the phone numbers of their victims in WhatsApp groups alongside instructions on what to do next: “You can start targeting. Let them know this and that.” Perla, who was a TV reporter and talk show host in Lebanon during the 2019 revolution, described how she had received swaths of threatening messages from Hezbollah2 supporters: I started getting a lot of WhatsApp messages from accounts with the picture of Hassan Nasrallah [note: the secretary general of Hezbollah] as profile pictures. There was so much offensive language, so much cursing but also death threats. Someone sent me a message like “If I see [you] on the street, I’ll run you over and take your body to your family and kill them,” and stuff like that.
A friend of hers, who worked at a Hezbollah-controlled media outlet, told her: “Perla, there is not one Hezbollah WhatsApp group that does not have your phone number on it,” to warn her of the avalanche of harassment that was coming her way. Yasmine also described how she had become the target of Hezbollah supporters’ harassment campaigns after she was filmed chanting at a protest: I think it started with WhatsApp. And then when I understood what was going on, I immediately deactivated my public social media accounts and then understood that my picture was being circulated with my name and number, encouraging people to message me.
Fawzi, an activist at a humanitarian organization who also has a background in technology development, experienced similar patterns of abuse. Someone stole his picture from his WhatsApp profile, and subsequently used the photo in a video that was peddling a conspiracy theory accusing him of working in Lebanon as a foreign agent. Fawzi said, This video spread like wildfire, and it was psychological torture because people kept sending it to me, and so many people knew about it. I became obsessed with the idea that my colleagues who worked in different regions and maybe didn’t know me saw that I was now being accused of being a spy and wondered if it was true [. . .] I know that my father is one of those people who get a lot of messages from WhatsApp news groups, and I was really worried that he’d get the video through these groups.
The doxxing campaigns described by our interviewees encompassed a broad spectrum of violent behaviors. Frequently, close friends and family members were threatened. Such campaigns often transcended the bounds of one platform, as Layla recalled, I was on Twitter, and at some point, I started getting a lot of followers. [. . .] I wrote something on the conflict in Syria, at the time I think the Ghouta massacre had just happened [. . .]. I wrote something about it on Twitter and someone sent me a message on WhatsApp [saying] that “I hope that your kids will die the same way the children in Ghouta died.” The Syrian regime had dropped chemical weapons on them.
Maria, an activist and former student organizer during the 2019 revolution, experienced “indirect doxxing,” which she explained as follows: “It’s kind of an implicit threat: I know about your family, I know where you live. I know that your mother works in that place, or that your sister lives there.”
Participants described instances of sexual violence, ranging from extortion and threats of sharing intimate images, to receiving dick pics, rape threats, and sexualized voice recordings. Roy, who works for an international non-governmental organization (NGO) tracking rights violations against MENA/SWANA journalists, recounted how he had received sexually coercive blackmail messages on WhatsApp when he was younger: It was really, really traumatic. I was nineteen; by then I was really not out, and someone was threatening to publish photos of me that he had, I had sent him. [. . .] There was this guy who was like: “If you don’t sleep with me, I’m gonna publish these photos.”
Maria characterized the nature of violent threats as typically highly gendered. She said, Obviously, they sexualize it, and because I’m a woman [. . .] I would always find comments particularly about me and the other women activists who are saying: “Oh, I know where you live, I’m gonna come and rape you.” Or: “Be careful, you’re gonna be raped someday soon,” or just very sexually charged threats or language.
Another interviewee, Stephanie, a television reporter who is also an activist campaigning for workers’ rights, described how she had received unsolicited dick pics on WhatsApp: “Once, someone was WhatsApp-ing me from a number that was not even from Lebanon, and they sent me nudes. Or, for example, I’d get messages like ‘Aren’t you [Stephanie]?’ and then he’d send me a dick pic.” Perla said that voice notes, short audio recordings sent between WhatsApp users, were also used by perpetrators to exert sexual violence: “I think there were a number of guys who were masturbating while they were sending us these voice notes.”
Infrastructural platform violence on WhatsApp manifested in a powerfully deleterious combination: conventional harassment strategies paired with sophisticated knowledge of platform functionality that further exacerbates harm.
Corollaries of abuse
When it came to repercussions of abuse, participants’ lives were most affected in two domains. The first comprised individual-level consequences affecting journalists and activists, such as emotional exhaustion or reputational damage. Just as insidious were societal-level consequences: participants withdrew from the public sphere, or self-censored their speech, as a direct consequence of experiencing platform violence.
Individual-level consequences
Emotional exhaustion loomed large across interviews. Stephanie described how being doxxed, “was an experience that traumatized me for a while, although I know he is physically far, just sitting on his phone, but despite that, he had an impact on me.” Perla, who had received a flood of messages on WhatsApp, did not want to give her harassers the satisfaction of changing her phone number, so she asked her sister for help: At that point, I just handed the phone to my sister and told her to block all the numbers that she didn’t know and that weren’t saved as contacts. She started reading some messages, and in less than a minute, I saw her just start to cry hysterically because the nature of the content was very disgusting, very ugly, and very offensive.
Other participants also felt the repercussions of experiencing platform violence on WhatsApp. Layla, a journalist in a leadership position at a pan-Arab media organization, said that “WhatsApp started to give me anxiety, to be honest.” Georgina said that the emotional exhaustion was part and parcel of what harassment campaigns were aiming at: “if you engage, and if you reply to anyone targeting you, these campaigns will rise, and they will keep playing on the mental health factor until you shut down your account.”
Reputational damage was another concern that activists and journalists we spoke with took seriously. Mona, for example, an exiled anti-Hezbollah journalist who covered the 2019 revolution, said, When I was in Lebanon, I was always worried about having a fabricated allegation of treason. You know how it’s like, any security apparatus can falsely accuse you of treason, and you’d sit in jail for many months where they’d properly “educate” you, as they’d say. And when you get out, even if you are exonerated, that stain on your reputation will always, always remain.
In multiple cases, interviewees resorted to a drastic response: switching off an account altogether. Yasmine remembers how she was overwhelmed by the sheer number of messages that needed reporting to the platform: “I shut down my phone and switched, like I opened a new WhatsApp account on a new number. I don’t know if I ever went back to the old one.”
Societal-level consequences
Many of the participants described that they had been targeted for their issue stance. Georgina explained how social justice-oriented activism, as well as sectarian association, affected how one would be harassed: So, these parties target activists and journalists who mainly are or were part of the protest in 2019, or who advocate for a civil society, or who advocate for [. . .] social justice, political justice, [. . .] change in Lebanon [. . .] and especially if they advocated for women’s rights, LGBTQ rights and anything that revolves against [note: meaning “around”] human rights.
She also acknowledged that “your sect really plays a huge role in what you can say, and what you can’t say, and who’s gonna target you and how real can that threat be.” Layla remembered an instance wherein she had publicly condemned the then-head of the Lebanese Anti-Cybercrime Bureau: There was a crackdown on free speech activists and normal social media users, they were being summoned to the Anti-Cybercrime Bureau. So, I was very vocal on Facebook about Suzanne al-Hajj and criticized her openly, saying things like “Are you out of your mind?” and “You want to bring us back to the times of the Syrian occupation?” So, anyway, I get an SMS and a WhatsApp message that said: “If you continue talking about Colonel al-Hajj, we’ll throw you in prison like your friends, you little bitch.”
Intimidation tactics led to different forms of self-censorship among our interviewees. Mona said that “there comes a period when one is forced into self-censorship depending on the situation. [. . .] There was a time it was just too dangerous to talk, and things were escalating, so I had to self-censor.” Yasmine, who had deactivated an account after being attacked, said the harassment campaigns “worked in terms of silencing me.” Perla said that she had effectively been silenced as well: “There was a period of time in Lebanon when I wouldn’t dare text on WhatsApp because we journalists were getting a lot of harassment.” Roy, reflecting on the dynamics of mob justice online, pointed out: “If we were two hundred years ago, and there was a mob stoning a woman in the street, we wouldn’t say that’s free speech. We would say that’s violence.” Diana, a veteran journalist who founded a Beirut-based pan-Arab news website, said that she was primarily concerned about younger colleagues: “Many of them tend to shy away or stop saying their opinion.”
When infrastructural platform violence manifests in emotional exhaustion, anxiety, and reputational harm, democracy suffers, and abusers get their way once journalists and activists—as a matter of self-preservation—resort to self-censorship or even withdrawal from the public sphere.
Platform failure and individual resilience
In describing infrastructural platform violence and its ramifications, this last theme focuses on the central responsibility that platforms like WhatsApp assume in mitigating harms. Roy said that he was “not fooled by a genuine political interest in the well-being of people at the top of these platforms,” that they were “just driven by money and gains [. . .] and [. . .] do not have a conscience.” Several interviewees highlighted how location factored into whether platforms cared about user safety, pointing to a hierarchy among MENA/SWANA countries. Sami, speaking from his experience as a digital security practitioner, said, Sometimes in Iraq they simply don’t reply, or they simply ignored the email. And in Libya, the person I was trying to help in Libya, they [social media platforms] never replied to any of the cases. [. . .] I mean, all of us are dark enough for them. But even with that, there is discrepancies in responding, for sure. [. . .] Probably if it’s a Saudi account, they would respond in a few seconds. Location matters, and money matters.
Interviewees generally thought platforms were unreliable stewards of their safety. Yasmine said, “it never even occurred to me to get support from WhatsApp,” while Fawzi said, There comes a point when you realize that their [platforms’] relationship to the community is a relationship of [. . .]: “Please, I’m begging you to do this because I’m dying, but I need to use your application even if your application is fucking me over.”
The people we spoke with noted how they were reliant on platforms as critical communication infrastructures while at the same time being at their whim. Layla pointed out the discrepancy between platforms’ technological development and the relative lag of safety mechanisms. She said, “technology is evolving in a x5 or x10 speed, and the platforms are taking action at a x1.5 speed. [Accountability] measures are much, much slower than the evolution of the mechanism[s] of harassment or bullying.” Distrust in platforms was rooted in personal experiences—not only with harassment but also with platforms not taking seriously the work of activists. Maria recounted, I personally don’t really believe anymore that Facebook and others have an authentic intention to remove these kinds of posts or to moderate them, even though I know at work in my previous job, we developed keywords [note: for content moderation] for them in English, French, and Arabic, particularly about racism in Lebanon, but they just didn’t use them.
Disillusioned participants resorted to mechanisms of self-preservation that would allow them to continue using platforms. Appealing to users’ own responsibility and individual resilience, as platforms do, puts the onus of dealing with abuse on the people who experience it. Mona said that “you expect that you will be targeted [. . .] and I was ready, to a certain extent, psychologically ready and I knew and expected this patriarchal discourse, which helped me a lot.” Elena, a student activist and political candidate for parliament, said that “you prepare yourself. You expect this. It allows you to grow a thick skin, I think.” Participants employed the tools that platforms provided—blocking and reporting—though those affordances could also backfire. Perla said that “it felt like I stopped working as a journalist, all I was doing was blocking people.” Georgina said that “if you block a huge amount of them [note: harassers], they will keep coming, and they will use different numbers [. . .]. And maybe, after you block them, they will be more aggressive.” Others had normalized blocking as an everyday routine. Diana, for instance, said that “you block them, and then move on.” Stephanie said that she would go a step further in particularly egregious cases: If someone crosses a line, I block. I don’t even entertain a conversation, immediately I block. Now, if someone goes too far [. . .] I screenshot and I post. I don’t have a problem; on the contrary, I insist on exposing them.
Participants said that they would occasionally also use the reporting tools that platforms offered. Across interviews, however, reporting was fraught as it was not always clear whether it would ultimately lead to a response from a platform. Mona said, There are some places where if someone threatens you on social media, you can easily lodge a complaint. But there are countries like Lebanon [. . .] where these things would just be swept under the rug. You feel like the platforms are not attentive to the context of each country, and I think that’s important to point out.
Similarly, Maria recalled that she had reported accounts to WhatsApp but did not “remember if we ever heard back.” Georgina thought that “reporting on WhatsApp isn’t really common knowledge among people,” pointing to a possible literacy gap in what safety features existed and how they could be used. Mayssa, an activist working at a freedom of expression-focused NGO, suggested that “you can’t work on the question of [platform] responsibility if you didn’t work on media literacy as well.” Halim, a digital rights activist, researcher, and writer, said that people simply did not know enough about online safety: “I mean, a lot of people have no idea about the Internet, how the Internet works, and they are not really interested to know.” All these dynamics underscore how online safety on WhatsApp presents an individualized burden. Perla summarized that “it requires personal effort for you to keep your profile safe, which is unacceptable.” Maria described it as a “burden when I have to report on my own certain things.”
Disillusionment, a sense of abandonment, and even gloom defined what journalists and activists thought they could reasonably expect from platforms: little to nothing. Though they acknowledged tools for reporting and recourse, they hardly ever relied on them, and felt let down by platforms.
Discussion
In this study, we conceptualized the notion of infrastructural platform violence, theorizing this type of violence in a twofold manner: First, as identity-based violence performed on platforms, and second, as violence performed by platforms, manifest in their systemic and infrastructural neglect of vulnerable populations.
By drawing on the experiences of women and queer journalists and activists from Lebanon on the messaging platform WhatsApp, we showed how the platform serves as a productive infrastructure for perpetrators of violence. Perpetrators subverted content moderation practices to get victims’ accounts suspended. WhatsApp groups were used as launchpads for the coordination of attacks through doxxing and non-consensual sharing of personal information or images. Simultaneously, groups were also the scenes of crimes: all it took was for a mob of people to get a hold of a victim’s phone number and “invite” them into a group. WhatsApp is a prolific platform for abuse, including for identity theft, the spread of conspiracy theories and false information about individuals, as well as death and rape threats. Much of this violence is identity-based—targeting victims for their gender identity and/or sexual orientation.
If infrastructure only becomes visible when it breaks (Seberger and Bowker, 2021; Star, 1999), those who are frequently targeted by violence must think that WhatsApp is permanently broken. It remains an open question whether this is a deliberate result of the platform’s design, or if the experiences documented in our study are the consequences of Western platform owners’ short-sighted, imperialist conceptions of what harms marginalized user populations might experience. Based on our methodology and data, we were able to provide evidence for infrastructural platform violence perpetrated on platforms. Future research should track platform policy changes and design over time, triangulating such information with interviews with former trust and safety team members to explore both the possibility of active (intentional) and passive (unintentional) infrastructural platform violence by platforms.
Rodgers and O’Neill (2012) have argued that infrastructure is complicit in the propagation of violence. When individuals experience infrastructural platform violence, this has devastating effects that permeate all aspects of a person’s private and professional life. Emotional exhaustion and trauma from dealing with abuse, alongside the looming threat of reputational damage, led many interviewees in our study to suggest that campaigns to silence them had been successful.
Our study contributes to the literature in four distinct ways. First, the concept of infrastructural platform violence allowed us to provide empirical evidence for the experiences of marginalized individuals at the hands of perpetrators intimately familiar with how platform affordances can be used to promulgate violence. Furthermore, we provide a conceptual vehicle to describe platform inertia as a corrosive force with the potential to facilitate harm. Not doing anything, or doing too little too late, promotes the veneer of neutrality while siding with perpetrators. Our framework opens up platforms to further explorations from a design justice perspective (Costanza-Chock, 2020): Platform operators must involve people at the margins in their design, testing, and considerations (Rigot, 2022).
Second, on a practical level, our research complicates questions about where the responsibility of a platform starts, and where it stops. Nefarious activities happen in any space that provides communicative affordances; encountering them in messaging apps comes as no surprise. Our study adds to contemporary platform governance debates by asking under what circumstances platforms lose their liability shields and become responsible for nefarious use—and violence (Jurecic and Pompilio, 2023).
Third, we contribute to the literature on identity-based harassment by showcasing how the journalists and activists we spoke with confirm what prior studies have established: Experiences of virulent sexual harassment and gender-based discrimination (Melki and Mallat, 2016), self-censorship as a consequence of doxxing and harassment (Koirala, 2020), and the all but certain reality that no one—not news organizations, not platforms—was going to come to their rescue (Holton et al., 2023; Mesmer and Jahng, 2021).
Fourth, our study highlights a sense of abandonment: WhatsApp—and platforms writ large—were not seen as reliable stewards of user safety by our participants. Experiencing neglect furthers the neoliberal mantra of individual resilience—that it is a user’s responsibility to protect themselves. Platforms directly rely on victims to do the work of dealing with abuse, for example, by using blocking and reporting tools. Merely hoping that industry self-regulation, combined with platforms’ risk avoidance impetus, will turn them into responsible stewards of user safety does not suffice. Asking critical questions about platforms’ values and ideologies becomes key. This could entail legal requirements for platforms to transparently disclose their infrastructural values—a term which refers to the “guiding beliefs about desirable conduct enacted through technological artifacts and systems, both intended and unanticipated” (Hallinan et al., 2022: 204). If WhatsApp constitutes an infrastructure because it is an essential communicative utility (Plantin et al., 2018), it is sensible to ask: infrastructure for whom? Our study shows that WhatsApp certainly is a productive infrastructure for millions of people in non-WEIRD countries, but that includes those who propagate harassment and violence. Focusing on WhatsApp’s harms is not to negate the platform’s pivotal democratic, interpersonal, and media role in public, semi-public, and private communication. Rather, imagining platforms as more sensitive, caring, and public-interest oriented infrastructures (Zuckerman, 2019) requires rethinking the very core of how surveillance capitalism permeates business models.
Limitations and future research
This study is not without limitations. Asking participants to speak about experiences of violence and abuse can be a significant burden. Furthermore, while we contribute a novel theoretical concept, our methods only allow us to provide evidence for infrastructural platform violence on platforms. Future research should explore designs that can provide empirical evidence of infrastructural platform violence—active or passive—committed by platforms. Researchers would be well-advised to investigate networked and cross-platform dynamics of infrastructural platform violence in tandem with tracking the continued evolution of WhatsApp. Above all, work in the platform design as well as the trust and safety space must center the interests, goals, and lived experiences of individuals who experience abuse.
Acknowledgements
The authors thank the interview participants for their willingness to speak with us and for providing critical insights.
Funding
The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: This study is a project of the Propaganda Research Lab at The University of Texas at Austin’s Center for Media Engagement (CME), where research is supported by Omidyar Network, Open Society Foundations, the Miami Foundation, and the John S. and James L. Knight Foundation.
