Abstract
Discourse has long been recognized as a source of symbolic violence, perpetuating power relations and reinforcing existing social hierarchies. With the rise of social media platforms, the influence of discourse on society has gained renewed attention. These platforms, while enabling social interactions, also serve as catalysts for violent behaviors, reinforcing and legitimizing forms of oppression and symbolic violence, particularly the violence of language. While the concept of toxicity is frequently used to describe this phenomenon, its meaning and connection to language often remain unexplored. This article aims to address this gap by examining the significance of toxicity in discourse and how the infrastructure of social media platforms facilitates the emergence of toxic discourses. It argues that while toxicity and violence are related, they are distinct phenomena. Toxicity, as a dimension of symbolic violence, contaminates debates and discourses, and is enabled by the characteristics of platformization in online interactions. Thus, toxicity is an effect of platforms mediating social interactions.
Introduction
Discourse is often perceived as a source of symbolic violence in the process of domination and naturalization of power relations (Bourdieu, 1991). It also reinforces these existing power relations in our society, thereby enabling the emergence of other forms of violence. This means that language, as well as meaning, has important impacts on how society is shaped and on domination. However, the central idea of how discourse influences society has gained renewed focus since social media platforms have permeated social interactions in contemporary society. These tools serve as catalysts for violent behaviors (Alrawashda, 2020), reinforcing and legitimizing forms of oppression and symbolic violence, such as the violence of language (Recuero, 2015). The introduction of platforms and their economic structures, which aggregated diverse channels and applied the same infrastructure and logic to all of them (van Dijck & Poell, 2013), not only expanded the negative effects that were already present in online communication, but also created new ones. In this article, we argue that one of these effects is discourse toxicity.
Although many authors have used toxicity as a concept, there is a lack of discussion regarding its meaning and its connection to language. Toxicity is a frequently employed but rarely explained concept. To address this gap, this article discusses what toxicity signifies in discourse and how the infrastructures of social media platforms allow violent discourses to become toxic. We argue that while toxicity and violence are sometimes used interchangeably, they are inherently different concepts. Toxicity, as it poisons debate and discourse, represents a dimension of symbolic violence facilitated by the characteristics of the platformization of online communication. Consequently, it is an effect of platforms mediating social interaction. This argument is important because it can help us understand the effects of platforms on social interaction and violence around the world, and because it offers a theoretical view of the concept of toxicity.
To explain our argument, we begin by reviewing concepts of violence and discursive violence, subsequently establishing connections between these concepts and online platforms. We then proceed to explain how toxicity can be viewed through these perspectives as an effect of platforms mediating violence. Finally, we discuss the characteristics of toxic discourse and explore its potential effects on society and discourse.
Discourse and Violence
Violence is a concept deeply intertwined with human perception and experience of the world. It is also a complex and challenging concept, especially when connected to discourse. Numerous philosophers have attempted to define violence in relation to human language. Weil (2012), for instance, argues that language reveals (human) violence by enabling its naming. Language, thus, not only names violence but also constructs it. Ricoeur (1998) posits that while language and violence are opposites, they are also mutually influential. When one discourse seeks dominance over others, language becomes a tool of domination and a means through which violence manifests itself. In his work, violence is not seen as something inherent to language. Instead, language is understood as a medium for violence, particularly discursive violence.
Žižek (2007) argues that violence assumes two major forms: subjective violence, which is directly perceptible and material, caused by an identifiable agent, as in crime or terrorism; and objective violence, which lacks a clear “agent,” operates in the background, and underlies outbreaks of subjective violence. Poverty and domination, for instance, represent forms of objective violence. Language, therefore, can also serve as a means of subjective violence. Importantly for this discussion, both types of violence are interconnected. Objective violence, in turn, comprises systemic violence, which pertains to the violence inherent in social structures, and symbolic violence, which occurs through language and culture.
Symbolic violence is also a concept used by Pierre Bourdieu (1991). The author focuses on the relationship between power and culture. He explains that symbolic violence operates through culture and language, allowing dominant groups to establish and reinforce power by naturalizing power relationships within society. Symbolic violence functions by creating and naturalizing knowledge, thereby manipulating people’s beliefs and attitudes. Language is the medium through which resistance can occur, but it is also through language that dominance can be imposed. Symbolic violence is not always perceived in the same way as other forms of violence. Žižek (2007) shares a similar perspective with Bourdieu, but to Žižek, symbolic violence is deeply ingrained in society, pervasive, and often operates unconsciously within social structures. Symbolic violence legitimizes exclusion and other forms of violence, providing a basis for their operation.
Discourse is a distinct concept from language. According to Foucault (1971/1996), discourse refers to a system that produces patterns of meaning that reinforce power relations established through language. Discourse generates knowledge and is embedded in the power structures of a given society and period. Fairclough (1989) shares a similar view, focusing on discourse as a “social practice,” or the impact language has on society and the institutions that shape language usage. He views discourse as encompassing the social and linguistic aspects of communication, considering the multidimensional interaction between language, society, and power. In this article, discourse is considered a form of social practice, aligning with Fairclough’s perspective, wherein it both reflects and constitutes power relations. This means that discourse is not just “talk”; it influences the practices, habits, and culture of a given society. While other authors provide us with important concepts on discourse and violence, it is the critical discourse analysis view that provides us, in this article, with the idea that discourse constitutes the domain of social practices and, thus, that concepts such as legitimation, spread, and violence can be connected.
Through this discussion, we come to understand that language can be a form of violence in itself, as it creates, legitimizes, and naturalizes knowledge about the world. Discourse, thus, can be used to legitimate power relations, to instigate violence, and to naturalize domination. This form of violence is known as symbolic violence, which is legitimized by social practices and discourse and serves as a means of control. In these cases, symbolic violence is a form of violence exercised through language itself: it legitimates the process of domination, reinforces relationships of oppression, and allows violence to become the punishment for those who deviate from established relations of power (as in gender violence and racism, for example).
Legitimation is a concept often described in terms of how a particular discourse is recognized as aligned with the values of a certain group or society at a given time (Reyes, 2011; van Leeuwen, 2007; van Leeuwen & Wodak, 1999). It refers to strategies employed to reinforce and disseminate certain discourses while reinforcing social norms and power relations. Although these strategies may vary in the literature, they share a common key point for this discussion: they rely on forms of credibility, tacit agreement, or acceptance from the audience. Violence can also be legitimized through discourse, often by political leaders, for example (Oddo, 2011), as they are legitimated based on the authority that comes with the position. Through legitimation, symbolic violence can be encrusted in social relations, creating, reproducing, and reinforcing structures of power and domination. These concepts are relevant as they set the ground from where we want to explore the connections between language, discourse, and online violence to explain toxicity online.
Historically, the internet has served as a space for the development and expression of aggressive behaviors, and social media, in particular, has been directly linked to the proliferation of hate speech (Gillespie, 2010). Online violent discourse has been studied under various concepts, such as hate speech, incivility, cyberbullying, trolling, and more (Anderson & Huntington, 2017; Calvert, 1997; Paz et al., 2020; Rossini, 2022; Sponholz, 2020). We will further examine previous research on discourse, language, and violence online so that we can establish the differences and similarities between these concepts and relate them to the idea of toxicity we want to explore in this article.
Hate speech refers to discourses that “antagonize or marginalize people based on their identification with a particular social or demographic group” (Elliott et al., 2016, p. 2). Hate speech encompasses specific types of violent discourse, such as racism (Matamoros-Fernández & Farkas, 2021), extremism (Inwood & Zappavigna, 2023), religious or gender-based attacks (Nazmine Khan et al., 2021), and other forms of xenophobic discourse (Paz et al., 2020). Other authors, such as Schoenebeck et al. (2023), include hate speech within a broader category called “online harassment,” which also includes insults and doxxing. Hate speech is, thus, a type of discourse that contains explicit violence against others, which brings it close to the idea of incivility that other authors work with.
Incivility is a broader term that has been employed to describe aggressive and disrespectful discourse, which may also encompass hate speech in some instances (Antoci et al., 2016). Rossini (2022) explains that incivility differs from impoliteness and intolerant discourse, as it describes content that is more threatening, aggressive, and harmful than mere impoliteness, while intolerance describes discourses that are “toxic” in the sense that they can harm political discussions and democracy. Although this work focuses on the impact of such discourses on democracy, the author provides an interesting perspective on what is considered toxic, as it pertains to discourses that have the potential for a greater impact on social discourse. Anderson et al. (2018), by contrast, explain that incivility can have a detrimental effect on conversations over time, leading to increased polarization and influencing the perception of online content. Other authors have also employed the concept of incivility in discussing general discourses on social media (see Antoci et al., 2016).
Similar to the concept of “hate speech,” incivility is also a concept connected to explicitly aggressive, hateful, or violent content. However, explicitly aggressive content is not the only kind that can convey violence; other types of discourses can also be violent. The concept of symbolic violence has been used to describe a broader spectrum of violent content. Symbolic violence has been used to examine types of violence that may not be as apparent, such as humor and passive-aggressive discourses, as well as other forms of reproducing and legitimating oppression (DeCook, 2018; Lumsden & Morgan, 2018; Recuero, 2015). Unlike the other concepts, symbolic violence has been used to describe, for example, the violence contained in memes (Nascimento & Bispo da Silva, 2021). In this specific case, the authors described how memes shared in WhatsApp groups may constitute blocks of meaning that reinforce gender stereotypes and, thus, some degree of symbolic violence.
These studies primarily focus on how discourse constitutes violence, either explicitly or not, rather than on its potential for toxicity. The concept of toxicity is less frequently discussed in the literature and is often treated as a synonym of hateful, harmful, or explicitly violent content (see, for example, Kwak et al., 2015; Sheth et al., 2022). However, as we will further argue, toxic discourse is not the same as hate speech or symbolic violence. Toxicity is a dimension particular to discourses that spread and, in this spread, can amplify the violence they generate. Toxicity, thus, has an effect on the community, either by silencing opposing discourses or by legitimating the violence through the spread of the idea. It is, therefore, a characteristic of certain violent discourses. This approach is similar to the original usage of the concept (Wexler, 2013). We will further explore these ideas in section “Toxic Discourses: Toward a Concept.”
In this article, we will explore how the concept of toxicity can provide a valuable framework for understanding different dimensions of online discursive violence. However, before delving into that, it is essential to examine the context in which this online discursive violence occurs: the platforms.
Platforms, Violence, and Discourse
Social media platforms play a crucial role in understanding the distinctions between online and offline discourse and the diverse effects that this violence can entail. Poell et al. (2019) explain that platformization is a process that arises from the platforms themselves as entities. They argue that platforms have permeated and transformed society, as their technical infrastructure facilitates the establishment of institutional and economic relationships that are reshaping societal dynamics. We contend that the structure of platforms, driven by their economic logic, is the fundamental context in which toxicity emerges, as it allows discursive violence to be quickly spread and legitimated.
Gillespie (2017) argues that the term “platform” is used by the industry and media players to signify that these tools offer “opportunities” for communication and meaning creation, rather than emphasizing their economic infrastructure. Conversely, Poell et al. (2019) emphasize that platforms are “infrastructures” that actively “shape interaction.” By shaping interaction, platforms also shape discourse. They not only create “public spaces” but also exert influence over them, leveraging power and economic structures that form the foundation of these platforms. According to the authors, platformization is a process in which the infrastructure of these platforms expands, driven by three key operations: (1) datafication, involving the collection and processing of behavioral data, with algorithms analyzing this data and making decisions that impact the network structures; (2) the reorganization of economic relations around two-sided or multi-sided markets. This relates to the evolution of markets within this domain, as platforms assume the role of mediators in economic connections, establishing rules and influencing these connections while also being influenced by users; and (3) finally, and perhaps most significantly, there is the issue of platform governance. Through algorithms and affordances, these platforms exert influence over user interactions, creating conflicts with regional laws and customs and giving rise to problems such as disinformation and violence. Consequently, platforms not only host and shape social connections and interactions but also operate according to their own set of rules, amplifying their power.
These operations are crucial for understanding how platformization influences the dissemination and legitimization of certain discourses over others within each society. Algorithms, for instance, play a pivotal role as one of the key components of a platform’s technical structure. Algorithms can be utilized to recommend content, connections, or groups for users to engage with, thereby directly influencing the social structure and the circulation of content within these platforms. Due to the need for monetization, platforms employ algorithms to present users with popular content that aligns with their connections or generates higher engagement. Consequently, these algorithms can contribute to the formation of echo chambers, restricting the diversity of content accessible to individuals.
The concept of an echo chamber refers to a phenomenon where individuals primarily encounter content within a specific group that reinforces their existing views (Sunstein, 2018). Echo chambers tend to create environments where core beliefs are seldom challenged, and dissenting content is swiftly silenced by the group. Bruns (2019) argues that echo chambers are groups that exhibit a preference for connecting with like-minded individuals. He also highlights a related concept that directly relates to this phenomenon: filter bubbles, a term coined by Pariser (2011). Filter bubbles describe the mechanisms by which search engines and recommendation and personalization algorithms inadvertently reinforce polarization. According to Bruns, these filters are a direct outcome of the algorithmic influence on content circulation within platforms. However, the author maintains a critical perspective on how these filters operate, contending that there is limited evidence to suggest that diverse content fails to circulate. Instead, people tend to “frame” different content within the context of their existing views. While acknowledging the partial connection between social media platforms and the increasing polarization observed in societies, the author, along with other researchers (Bail et al., 2018; Garimella & Weber, 2017), emphasizes the need for more robust evidence to support this claim. Nonetheless, algorithms can increase the visibility of certain content (sometimes violent content) and give the impression that these discourses are universally accepted when, in most cases, they are only circulating within a particular group.
These structures, therefore, have an impact on how discourses are produced and disseminated. boyd (2010) explains that social media possesses distinct affordances that influence what she terms “networked publics”—the space where the public interacts and connects. Her perspective reveals that social media tools rely not only on their specific affordances but also on how they are appropriated and utilized by the public. The affordances can be summarized as follows:
a. Persistence: Published content remains in the tool indefinitely or until it is deleted by someone.
b. Replicability: Content can be easily copied and shared with other audiences.
c. Scalability: Content can be rapidly shared multiple times, reaching an increasing number of people in a geometric progression.
d. Searchability: Content can be searched, found, and shared again.
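The geometric progression implied by scalability can be made concrete with a short, purely illustrative sketch (the function, its name, and all figures below are hypothetical toy values, not drawn from boyd's work or any platform data):

```python
# Toy model of the "scalability" affordance: if each wave of sharing
# reaches a fixed multiple of the previous audience, cumulative reach
# grows as a geometric progression. All numbers are hypothetical.

def reach_after_rounds(initial_audience: int, multiplier: float, rounds: int) -> int:
    """Estimate cumulative reach after a number of sharing rounds,
    assuming each round exposes the content to `multiplier` times
    the audience of the previous round."""
    total = initial_audience
    current = initial_audience
    for _ in range(rounds):
        current = int(current * multiplier)
        total += current
    return total

# A post initially seen by 100 people, with each sharing wave
# reaching 3x the previous one, after 5 rounds of resharing:
print(reach_after_rounds(100, 3.0, 5))  # 100 + 300 + 900 + 2700 + 8100 + 24300 = 36400
```

The point of the sketch is only that even a modest per-round multiplier produces reach that quickly dwarfs the original audience, which is why scalability matters for how far a single violent post can travel.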
The dynamics of these affordances, which emerge from how these publics use and create meaning through social media, include the following:
a. Presence of invisible audiences: Referring to the lack of awareness, by content publishers, regarding their specific audience. Similar to a theater, individuals can only see or imagine part of their audience, but as content is shared and interacted with, it reaches different audiences.
b. Collapsed contexts: Signifying the absence of clear boundaries between various interaction contexts, making it challenging for people to interpret and understand them.
c. The blurring of public and private spaces: Connected to the previous dynamic, this refers to the difficulties in perceiving the context of conversations and social interactions.
So, we argue that the structure and appropriation of social media platforms influence discourse and discursive violence. Through these platforms, social media also allows people to produce, share, and legitimize discourses and, furthermore, different types of violent discourses and symbolic violence (Recuero, 2015). The notion of “networked publics” is important not only because social media platforms create a “public” space, but also because this space is connected to a network of people who interact and create connections and affiliations through these tools. Given this context, social media platforms play a role in spreading violence through discourse. Alrawashda (2020) argues that the issue of symbolic violence and discursive violence on the internet and social media has become one of the great challenges of our time. As we will further explore, this happens because the infrastructure and economy of platforms generate a field where violent discourses are quickly spread and legitimated, creating further toxic discourses.
Toxic Discourses: Toward a Concept
In this section, we will further discuss the concept of toxicity. We will explore its current usage in the literature and how it can be understood as a product of discourse platformization, along with the dimensions associated with these ideas. We will argue that toxicity is a characteristic of platformization and manifests through three key elements that social media platforms support and that amplify the negative effects of violent discourses: spread, legitimation, and harm.
While there have been limited discussions regarding the meaning of toxicity and its relation to discourse, various authors have referred to different forms of violent discourses. Online toxicity has been cited by several authors as an important element, sometimes synonymous with hate speech (Pascual-Ferrá et al., 2021), and other times as a dimension of hate speech that remains underexplored (Kim et al., 2021). Some authors define toxicity as “negative online interactions,” while others associate it with derogatory and rude discourses (Pavlopoulos et al., 2020).
However, a clear concept of toxicity in discourse is challenging to find in most works that mention online toxicity. What most authors agree on is that toxicity is perceived as an effect of discursive violence. For instance, Redkozubova (2023) views toxicity as the capacity to inflict harm through discourse, while other works consider it a disruptive discourse with detrimental effects, such as conversation derailment (Saveski et al., 2021) or the spread of violence (Massanari, 2015). What all of these authors share is the idea that toxicity accompanies violent discourses and produces effects of its own. This is a key aspect of our argument, as we will also argue that toxicity represents a specific dimension of the platformization of violent online discourses.
Understanding the nature of how discourse changes with platformization (van Dijck et al., 2018) requires considering the characteristics of platforms. First, platforms exert control over discourse through their architectural design, which prioritizes interaction and economic benefits for companies. As a result, platforms shape discourses by promoting the values that emerge from their affordances. For example, platforms prioritize visibility as a crucial value (Myles & Trottier, 2017). Visibility allows others to engage with content and share and react to it. As discourse spreads through the scalability, replicability, and shareability of content (boyd, 2010), it also facilitates the legitimation of violence. To gain visibility, users may employ various tactics in their interactions, attempting to achieve virality. Humor, for instance, can be utilized as a strategy to amplify the visibility of violent discourses (Recuero & Soares, 2013).
Toxicity is a characteristic inherent to mediated discourses that is directly influenced by the structure of platforms and their appropriation by society. While most attempts to define toxicity aim to qualify these discourses, they often overlook an essential dimension of discursive violence occurring through platforms: the extent to which it spreads and contaminates other discourses. Therefore, toxicity represents the degree to which a particular discourse can spread and negatively influence other discourses, a potential amplified by the technical infrastructure of platforms. This dimension is crucial to understanding the actions and discourse facilitated by social media platforms.
The first key element of toxicity is its spread. Social media platforms allow for rapid dissemination through features such as likes, shares, and retweets. This dimension aligns with what boyd (2010) referred to as “replicability” and is directly connected to the ability of these discourses to be swiftly replicated. It is also a consequence of the platforms’ architectures and algorithms (Poell et al., 2019).
Twitter is often a good example of how these discourses can spread. During the 2023 FIFA Women’s World Cup, for example, several tweets making derogatory comments about women’s soccer spread quickly among users, often disguised as “personal opinions.” One of the most shared tweets, posted during one of the games, said: “It is my opinion that women’s football is horrible, not comparable to men’s, it is a torture for the spectators. Am I allowed to express my opinion?” Another criticized a female player for wearing makeup, accusing her of trying to be attractive (to men) and claiming she used it “to show off.” These tweets received more than 500 retweets within a few hours and were reproduced many more times by the audiences of those who shared them. They stereotype women and indirectly reproduce the idea that women’s soccer is not as good as men’s. As they are quickly reproduced and spread through the platform, they allow gender-based violence to spread and contaminate other discussions about the same games. This is very similar to Kavanagh et al.’s (2019) observation of how women players were targeted on social media platforms during the Wimbledon Tennis Championships. Toxicity, thus, is connected to the fact that these discourses spread quickly and appear to the audience as a majority opinion.
Another significant element of toxicity is its legitimation. Social media platforms enable the authorization process by determining which accounts are authentic and therefore reliable. When these accounts produce or share violent content, they contribute to toxicity by legitimizing violence through their perceived authority. For instance, Maarouf et al. (2022) examined the factors contributing to the viral spread of what they termed “hate speech.” According to their findings, the potential for certain hateful content to spread is influenced by the visibility and credibility of the authors. Thus, the level of “authority” granted by platforms affects both the circulation of content and the degree of toxicity. This suggests that platform structures, in assigning verification or authority marks to users, confer greater power for content circulation, as argued by van Leeuwen (2007) and van Leeuwen and Wodak (1999).
One example here is how former U.S. president Donald Trump used his social media accounts to question the election results and incite violence after he lost. His famous tweets after the results were announced, “Statistically impossible to have lost the 2020 Election,” and, later, “Big protest in D.C. on January 6th,” from December 2020, are connected to the invasion of the Capitol, which led to his ban from these platforms. Trump’s posts had an important impact because he was the sitting president of the United States, the very office he was contesting in the election. Thus, his discourse was legitimated by the position of authority he held and, consequently, had a stronger impact both online and offline in the spread of violence (AlBzour, 2022). Legitimation is thus connected to the fact that toxic discourses are legitimated by likes, shares, authorities, and influencers and are quickly connected to the perception of truth.
Both elements of toxicity arise from platforms’ infrastructures, which allow violence to spread through affordances such as likes, reposts, and retweets: the “shareability” of content (boyd, 2010). As violent discourses are shared by the audience, the number of likes, retweets, and shares can indicate the level of support they receive. As we previously argued, audience acceptance serves as a form of legitimation.
While not all toxic discourses constitute hate speech, the legitimization of such content introduces another dimension of impact on violence. Bourdieu (1991) wrote about how the legitimation of violence through language can naturalize oppression. Social media platforms facilitate the rapid legitimization of violence through authorization signals from authorities. The creation of authorities and the encouragement to share content, whether to gain their legitimacy or to gain visibility, are characteristics of the interactions between platforms and society (Poell et al., 2019).
Finally, another crucial element of toxicity is, of course, harm. Discourse has the power to produce harm and negatively influence conversations, perceptions, and individuals, as we discussed in the earlier part of this work. Toxic discourses can poison conversations, leading to silence, division, and derailment while often legitimizing domination and oppression. They can also impact offline relationships, creating unsafe environments and damaging interpersonal connections (Patchin & Hinduja, 2017). Harm can also be inflicted through symbolic violence, as it legitimates oppression and power dynamics, such as racism (Keum & Miller, 2018). Toxic discourses, therefore, constitute an important dimension of various forms of online violence, as they have the capacity to negatively impact societal relations and create the impression that “everyone agrees” with certain ideas.
Harm is present in both of our previous examples. In the first, the discourse frames men’s soccer as the only valuable version of the sport and depicts women as less capable than men, reinforcing societal dominance; in the second, the violent content spread quickly and sparked further offline violence.
These three elements—harm, legitimation, and spread—characterize what we refer to as toxic discourses, which are directly connected to platformization and their influence on conversations. In the following section, we will delve into the effects of toxicity on online discourse.
Toxicity Effects of Violent Discourses on Conversations
Toxicity, as we have argued, has negative effects on conversations and the platforms themselves. Toxic discourses contribute to the polarization of public conversations, fueling violent interactions and rapidly spreading their negative impact (Milačić, 2021; Saveski et al., 2021). This polarization can be detrimental as it diminishes the vitality of the public sphere, creating filter bubbles and echo chambers where individuals primarily engage with like-minded individuals who share similar perspectives (Barberá, 2020). Consequently, the diversity of accessible content decreases, or different content is framed in a way that aligns with a single dominant discourse. Recent research by Suarez Estrada et al. (2022) examining violent tweets revealed a critical effect: an increase in affective polarization. Affective polarization refers to the phenomenon where individuals develop positive sentiments toward their own political party and negative sentiments toward individuals from opposing or different parties. This poses a problem as affective polarization can breed distrust and animosity toward others, influencing perceptions of adversaries’ public policies (Druckman et al., 2021).
Polarization is also closely linked to political radicalization (Barberá, 2020), which can escalate offline violence. Studies connecting political violence with extremist rhetoric provide insights into these effects (Pickard et al., 2023). This type of radicalization further divides public spheres of discussion, silencing dissenting voices and blocking content that challenges prevailing views within the debate. In addition, radicalization can fuel offline consequences, such as violent protests (Evangelista & Bruno, 2019). Thus, online toxicity has the potential to incite and exacerbate offline violence. Trump’s use of social media platforms to incite violence, discussed by AlBzour (2022), is a clear example of this. Another example is presented by Chaudhry and Gruzd (2020), who show how social media platforms can create “safe spaces” for a vocal minority to express racist discourses. In these cases, the structure of social media platforms enables silencing, as widely shared discourses circulate more.
Toxicity is highly contagious, as demonstrated by Kim et al. (2021), who revealed how violent comments on Facebook can alter the tone and amplify the spread of violence in subsequent interactions, fueling behaviors and fostering further toxicity. This escalation in toxicity can also generate doubt and fear, poisoning discussions and at times impeding the circulation of important information. Pascual-Ferrá et al. (2021) discussed how heightened tensions had detrimental effects on the dissemination of content related to COVID-19, exacerbating fear and amplifying the circulation of problematic content, including disinformation.
Moreover, toxicity has the capacity to silence discourses. As toxic environments erode trust and social cohesion, they legitimize hate speech and other forms of unsafe debate. When conversations become toxic, individuals may lose trust in the platform, fellow participants, or the information being shared. This erosion of trust can fragment communities, impede cooperation and collaboration, silence crucial voices, and marginalize certain discourses. The consequences of such dynamics extend both to the platforms themselves, as individuals reduce their participation and lose faith in the shared discourses, and to the public sphere, as problematic views and extremism are disseminated and legitimized. Hampton et al. (2014) have discussed how people felt less comfortable discussing certain issues online when they felt the audience might respond aggressively to disagreement. Similarly, Burnett et al. (2022) called this process “self-censorship,” as people silence themselves when they perceive that a disagreeing majority surrounds them online.
Not all forms of discursive violence are explicit, as discussed in the first section of this article. Symbolic violence manifests in various ways, including passive-aggressiveness and other forms of oppression. Harrington (2021), for instance, describes “toxic masculinity” as an outcome of the perpetuation and legitimization of gender stereotypes and hierarchies. Lecompte-Van Poucke (2022) similarly examines “toxic positivity,” whereby the discourse of relentless positivity can be oppressive due to power relations and the obligation to always feel good. These discourses often present an exaggerated perspective and impose a sense of conformity on others, creating an expectation that they too must share the same feelings to fit in, thereby perpetuating symbolic violence. Through social media platforms and their rapid legitimation of online discourses, symbolic violence can disseminate models of toxic behavior. Thus, toxicity is a characteristic of the platformization of violent discourses, whether explicitly aggressive or not, and it can spread and generate negative effects on public discussion.
Conclusion
In this article, we aimed to define and explore the concept of toxicity as an outcome of the platformization of discourse, using a theoretical critical discourse analysis approach. Our objective was to illustrate how current definitions of violent discourses often overlook toxicity, an inherent dimension shaped by the structural and economic affordances of platforms. We discussed how toxicity emerges as a consequence of the spread, harm, and legitimation of violent discourses, dimensions that are crucial to comprehend within the context of platformization. Furthermore, we examined the potential impacts of the platformization of violence, including toxicity, on online conversations.
Toxic discourse is, thus, violent discourse that exhibits the three elements stated earlier (harm, spread, and legitimation) and that is enabled by social media platforms’ infrastructures and algorithms. Toxicity, moreover, is a problem greatly aggravated by the spread of violent discourses on these tools.
While this article primarily offers a theoretical examination of the concept, it contributes by systematizing the concept of toxicity and highlighting the importance of understanding the effects of platforms on society. Moreover, it opens an important discussion of how online violence, which may not always be explicit, is facilitated by the logic of platforms and can have negative consequences for both groups and individuals. Further research is necessary to enhance our understanding of these effects, enabling governments and policymakers to develop a comprehensive understanding of how platforms should address violence, particularly discursive violence.
Footnotes
Declaration of Conflicting Interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: This study was partially supported by the National Council for Scientific and Technological Development (grant nos. 406504/2022-9, 405965/2021-4, and 302489/2022-3).
