Abstract
Utilizing the data collected from 40 in-depth interviews, this study explores: How do users perceive social media platforms’ responsibility in designing algorithms? What do users perceive as diverse or similar in the content generated by algorithmic recommendation systems? The analysis discusses and evaluates the tension between (a) how the platform’s algorithm feeds users similar videos that they highly appreciate and, inversely, (b) how the recommendation of similar videos might limit the diversity of content to which the user is exposed. The analysis adopts a semio-ethic framework to understand why algorithmic platforms like TikTok are perceived to be so efficient in promoting an apparent perception of inclusivity while deliberately erasing alterity and promoting universal sameness. Although videos recommended by TikTok might appear to satisfy computational criteria of diversity, the outcome masks the absence of algorithmic pluralism. The algorithm generates socially desirable videos to allow users to feel comfortable in their in-group. In other words, recommended videos perpetuate a digital form of conformism in a conscious attempt to create the illusion of a more plural community. Advancing the study of algorithmic pluralism is therefore crucial to evaluate the extent to which plurality is understood by users, and what assumptions and ethics underpin the cultures that foster algorithmic recommendation design.
TikTok has been making headlines—but mostly for less than desirable reasons. The social media platform is facing increasing scrutiny over privacy and data security concerns (Wall Street Journal, 2021a; Wired, 2021). TikTok does not elaborate on the criteria by which specific videos are recommended or on what kind of private data are retained to match videos with a user’s interests (Wall Street Journal, 2021b). However, TikTok has unquestionably grown to become a popular platform, particularly among young people, for its culture of inclusivity (Financial Times, 2020). Several of the most popular hashtags promoted by TikTok, such as #AllTheDifference, are aimed at celebrating the diversity of content creators (TikTok, 2020). Furthermore, TikTok has become the leading digital platform to document anti-racist demonstrations and to express solidarity with Black Lives Matter (CNN, 2020). The hashtag #BlackLivesMatter reached 28 billion views on the platform as of August 10th (TikTok, 2021).
TikTok adopts an algorithm that is considered among the most sophisticated recommendation systems in shaping individual experience and social interactions (The New York Times, 2020). Traditional social media rely primarily on users’ active online behaviors (e.g., liking, clicking, or following) to determine user preferences. TikTok captures our entire range of behavioral patterns while we watch videos and uses those patterns to train its algorithms (Wired, 2020a). These behaviors include the number of times we allow a video to repeat, how rapidly we scroll over certain information, and whether we gravitate toward specific types of lyrics or ‘effect filters.’ As a result of this recommendation system, TikTok users can be passive, if they so choose, while still receiving an engaging, personalized content feed far more quickly than they would on any other social media platform (Wang, 2020).
Furthermore, TikTok’s algorithm seems to be very effective at predicting our interests even when we do not express them explicitly. According to the Wall Street Journal (2021b), TikTok is more accurate than other social media in learning about users’ desires and feelings and then uses that information to push similar content that seems to be difficult to escape. Once the algorithm has identified the user’s interests, it will automatically alter the feed to contain only videos that are relevant to said user’s preferences. Consequently, it is no longer necessary for a social media platform to focus on connecting more people. TikTok completely bypasses social networks by connecting individuals directly with their interests. Social media platforms based on this algorithmic peculiarity raise ethical concerns because they remove the possibility of direct personal and social engagement, transferring agency from the user to the algorithm.
Another significant challenge is that TikTok’s algorithm may be more prone than traditional platforms to reducing the diversity of content that users consume on the platform. In a highly visual social media platform such as TikTok, it is not surprising that the algorithm can filter similar profiles based on a person’s appearance, their body type, or what they are wearing (Wired, 2020b). In 2020, Marc Faddoul, an artificial intelligence researcher at the Human Rights Center, UC Berkeley, discovered that TikTok was proposing accounts to him with profile images that matched the race, age, or other traits of the ones he had previously followed (Vox, 2020). Although this evidence is anecdotal, it underscores the essential ethical considerations concerning the hazards of visually based recommendations. The use of visual similarities to generate recommended videos is an ethical issue because users may perceive these similarities as spontaneous or recognizable.
From a semiotic perspective, visually driven recommendations on TikTok can pose problems. The algorithm favors content that is visually similar to that with which users have previously engaged, instead of prioritizing content that challenges or expands interests (Bhandari & Bimo, 2022). This can create a “visual bubble,” limiting exposure to diverse perspectives and content, as users are presented only with similar videos (Boeker & Urman, 2022). Moreover, TikTok’s proprietary and opaque algorithm makes it difficult for users to understand how it works or why certain videos are recommended to them (Kaye et al., 2022). This lack of transparency raises ethical concerns, particularly when users are fed highly relevant and appealing content without understanding why it was selected for them. Therefore, while algorithmic platforms like TikTok may promote an apparent perception of inclusivity, this can often come at the cost of erasing alterity and promoting a universal sameness. To avoid these issues, it is important to prioritize diversity and pluralism in the design and implementation of these algorithms.
Filter bubbles (Pariser, 2011) and echo chambers (Sunstein, 2018) have been used to describe the phenomenon of users being exposed only to opinions and ideas that align with their own, resulting in a skewed perception of reality (McEwan et al., 2018). The concept of filter bubbles has been widely discussed in academic literature (see Ross Arguedas et al., 2022), with scholars examining its impact on democracy, public discourse, and political polarization (Beam et al., 2020; Terren & Borge-Bravo, 2021). This algorithmic filtering occurs when recommendation systems used by social media platforms prioritize content that confirms an individual’s existing beliefs and values while downgrading or suppressing content that challenges or contradicts them (DeVito, 2017).
In recent years, the issue has gained even more attention in popular media, particularly in the context of Facebook, whose algorithm has been criticized for promoting content that reinforces users’ preexisting biases and worldviews (Goldberg, 2021). Critics argue that filter bubbles can lead to a lack of diversity in information and viewpoints and exacerbate existing societal divisions and political polarization (Kligler-Vilenchik et al., 2020). On the other hand, proponents argue that recommendation systems can provide users with a more convenient and relevant experience (Bodó et al., 2019) and help them discover new ideas and perspectives they may not have encountered otherwise (Helberger, 2019; Möller et al., 2018).
The impact of algorithmic sociality is recognized as a complex and multifaceted phenomenon (Sujon, 2021) that has three ethical implications for algorithmic recommendation systems: first, algorithms may undermine individual agency and autonomy (Milano et al., 2020); second, the collection and use of personal data by companies to create personalized content may also raise privacy concerns (Parisi & Parente, 2021); third, algorithms may contribute to the digital divide by perpetuating existing inequalities (Hoffmann, 2021) and limiting the access of marginalized individuals to diverse information and perspectives (Gran et al., 2021). These ethical concerns make TikTok’s algorithm particularly interesting to study, especially considering its remarkable success among young users (Bloomberg, 2020). Young users may be vulnerable to the influence of algorithmic recommendation systems due to a lack of media literacy skills (Livingstone, 2015), which makes it difficult for them to understand the ethical implications of these algorithms.
The goal of the study is thus to understand the extent to which young users perceive diversity in algorithmic content selection and the ethical responsibility of social media platforms in their curation of content. A persistent problem in the debate around the ethical risks posed by algorithms is the “impact of personalized recommendations on the realization of media and information diversity” (Helberger, Karppinen, & D’Acunto, 2018, pp. 191–192). However, the concept of information diversity has been used descriptively in the current literature to emphasize the existence of heterogeneous content without examining its ethical and semiotic value. In this perspective, diversity must be constructed together with users, who are the subjects of a cognitive-interpretive process. Furthermore, very little qualitative research has been conducted to gauge the extent to which young users are aware of the complexity of ethical problems raised by the adoption of algorithmic recommendation systems (DeVito et al., 2017; Duffy & Chan, 2019; Liao & Tyson, 2021), and only a small number of studies to date have provided TikTok users with the opportunity to voice their experience (e.g., Klug et al., 2021; Siles et al., 2022; Simpson & Semaan, 2021).
The methodology draws on 40 in-depth interviews conducted with international students, between 18 and 24 years old, who were residing and studying in the Netherlands in the spring of 2020. While TikTok’s video recommendations may appear to meet computational diversity criteria, the algorithmic outcome conceals the absence of true pluralism. The algorithm generates socially desirable videos to acclimate users to their in-group. The study seeks to explore the following two core questions: How do users perceive social media platforms’ responsibility in designing algorithms? What do users perceive as diverse or similar in the content generated by algorithmic recommendation systems? Specifically, the analysis discusses and evaluates the tension between (a) how the platform’s algorithm feeds users similar videos that they highly appreciate and, inversely, (b) how the recommendation of similar videos might limit the diversity of content.
This research contributes to the study of the impact of algorithmic visual personalization on social media in three ways. First, it proposes a semio-ethic perspective to facilitate a more rigorous analysis of recommendation systems. Second, the research frames different algorithmic recommendation outcomes in a semio-ethic context. Algorithms are generally considered to be single, immutable, external forces. However, this idea is a reductive representation of reality (see Narayanan, 2022) that limits our ethical understanding of the material outcomes of algorithms. For instance, investigating recommendation systems involves measuring user exposure to diversity and similarity (Möller et al., 2018), but we should also understand how ethical principles should be involved in the process of design (Milano et al., 2020; Mittelstadt et al., 2016). The third and final contribution of this research is the development of a novel approach to evaluate the ethical responsibility of social media through the experience of social media users.
Theoretical Framework
The field of algorithmic ethics is often seen as a critique of the impact of recommendation systems on society, with the goal of determining whether a given algorithmic technology is “good” or “bad.” This approach to ethics tends to oversimplify the complex interplay between technology and society as most technologies have both positive and negative aspects (Rosenberger & Verbeek, 2015). In addition, the implementation of technologies is often determined by factors outside the control of regular users, even if a particular technology is seen as “bad.” Current work (Bucher, 2012, 2018) frequently advocates conceptualizing algorithms as dynamic semiotic structures with performative characteristics. The relationship between our self-awareness and our engagement with algorithms is reciprocal, with each influencing the other. Therefore, the interdependence between subjectivity and objectivity highlights the idea that a user is not simply a passive subject. Instead, both our sense of self and our perception of the world are shaped by the dynamic interplay between individual experiences and algorithmic technology.
It can be argued that recommendation systems are inherently political, as they construct a reality that includes certain interests and excludes others (see Hoffmann, 2021). This is not always a deliberate or conscious decision made by designers, who are often focused on solving technical problems. However, in the process of constructing technology, designers invariably make assumptions and presuppose certain values and beliefs (see Introna, 2007, 2017). While users may be aware of the algorithmic intervention and reinterpret the outcome to suit their own needs, it becomes increasingly difficult to use recommendation systems in ways other than those intended as they become embedded in social media platforms (Ettlinger, 2018). If the design of algorithms is considered a political matter, then it becomes immediately relevant to ethical considerations, as the construction of interests and values within technology and practices bears ethical significance.
This approach underlines the semio-ethic relationship between algorithms and users, acknowledging both the material and computational aspects of algorithms and the capacity of individuals to make sense of and interact with the content presented to them by the algorithms (Gillespie, 2014). Therefore, the TikTok experience cannot be subjected to criticism with a sole focus on the material outcomes of the algorithms, ignoring the broader semiotic processes (semiosis) produced by users. The role of users attempting to find significance in their lives and define themselves meaningfully must “exist in a horizon of important questions” or “shared concerns” (Taylor, 1992, p. 44). Thus, when an algorithmic process cannot be fully understood, it becomes necessary to explore the ways in which individuals interact with these systems and the extent to which these interactions influence their views on privacy, autonomy, and justice in digital media.
Empirically examining how people think, feel about, and act on algorithmic pluralism on their own terms may serve as a catalyst for public awareness and debate around the role and ethics of algorithms in general in shaping our society (Lomborg & Kapsch, 2020). For this reason, the semiotic process (Beer, 2017) of users interpreting the pluralism of the algorithm cannot be ignored if we want to hold social media platforms accountable for the flaws in their design.
This section presents the theoretical framework in two steps. First, I highlight the ethical connection between algorithms and users by introducing analytical concepts that help us comprehend algorithmic pluralism. While “algorithmic diversity” and “algorithmic pluralism” are closely linked, they are not interchangeable terms. Therefore, I propose the adoption of “algorithmic pluralism” here, which avoids a normative definition that limits content diversity to individual satisfaction. Instead, it is defined by the enhancement of social and cultural diversity through the use of diverse algorithms. This approach acknowledges the ethical implications of algorithmic decision-making and considers the importance of diverse perspectives and values.
Second, I aim to address the problem of responsibility through a semio-ethic perspective that focuses on the ethical responsibility of social media platforms. Semio-ethics emphasizes the relationship between identity and otherness. This perspective is beneficial because it highlights the mobilization of our semiotic and ethical knowledge during interactions with algorithmic pluralism. It encourages a more responsible use of social media platforms, recognizing the potential impact on individuals and society at large.
Algorithmic Pluralism
Diversification in algorithm design, which involves the inclusion of unexpected or uncommon items to disrupt similarity, is deemed to be a critical component (Möller et al., 2018). This design principle is intended to mitigate the filter effect of algorithmic recommendation systems and ensure that users are exposed to a wider range of content (Pariser, 2011). For this reason, diversity and similarity are mostly modeled and conceptualized at the input level by computer scientists (Möller et al., 2018, p. 961). However, other authors seem to conceptualize diversity in different manners: source diversity, topic diversity, diversity in terms of the demographics of users, and so forth (see Joris et al., 2020 for a review). Nonetheless, according to this fragmented body of literature, diversity in recommendations has an instrumental purpose in improving users’ experience (Willemsen et al., 2016).
Thus, the definition of diversity in algorithm selection is more contentious than it appears, especially if it neglects to view the algorithms in the particular social and cultural contexts in which they work (Milano et al., 2020). A large body of critical literature sheds light upon the black-boxed character of proprietary algorithms (Pasquale, 2015); the social discrimination that results from biases within algorithmic models and which is, thus, produced by technology (Noble, 2018); and the subtle kinds of power that algorithms exert over users (Lomborg & Kapsch, 2020; Sujon, 2021). These inquiries inspire some central questions: What responsibility do social media platforms have in designing algorithms to maximize such diversity? What norms underlie any conceptualizations of diversity? How can diversity support openness and engagement with the world and others?
Diversity, as it is understood in the computer sciences (Blikstein et al., 2014), involves incorporating multiple perspectives and sources of data into the development of algorithms and ensuring that these systems do not reinforce existing biases or stereotypes. This perspective encounters a series of epistemological and conceptual constraints, in which the simplistic application of a set of inputs ignores the complexity of lived experiences and the underlying ethical dilemmas. While the literature uses the term “algorithmic diversity” to describe the selection of diverse content (Helberger, Karppinen, & D’Acunto, 2018), it does not explore the ethical and semiotic implications of this variety in algorithmic personalization. In this sense, diversity may increase exposure to a wide range of ideas, perspectives, and styles but does not necessarily promote a pluralistic view that acknowledges and includes multiple perspectives, values, and beliefs (Moon, 2012).
While algorithmic diversity and algorithmic pluralism are related, they are not the same concept. A recommendation system can be diverse without necessarily being pluralistic if it does not allow for the inclusion of different perspectives or for users to express their own preferences. Similarly, a recommendation system can be pluralistic without necessarily being diverse if it does not offer a broad range of content or if it is biased toward certain types of content. This conceptualization of algorithmic pluralism can help to broaden and deepen our understanding of ethical issues and facilitate more informed and nuanced moral reasoning.
To address the limitations of the current literature on algorithmic diversity (see Joris et al., 2020), I therefore suggest adopting the concept of algorithmic pluralism. This new concept moves beyond a simplistic notion of diversity, which focuses only on the inclusion of a variety of content (Hoffmann, 2021), and instead emphasizes the importance of acknowledging and including multiple perspectives, values, and beliefs in algorithmic decision-making. In a pluralistic society (Berlin, 2003), diverse groups can work together effectively and respectfully to achieve their goals, working toward the peaceful coexistence of diverse interests, convictions, and lifestyles (Walzer, 1997).
In addition, pluralism in ethics (Ess, 2006) recognizes that there are multiple, equally valid values with no clear hierarchy of priority that can conflict with one another. Therefore, algorithmic pluralism can play an important role in promoting ethical understanding by providing individuals with opportunities to experience different perspectives and moral values. When recommendation systems are not transparent and their decision-making processes are hidden, it can be difficult to identify instances where bias has been incorporated into the system which may lead to unfair and unjust outcomes (Hoffmann, 2019). Thus, it is crucial to consider the ethical implications of these systems to prevent the perpetuation of biases and to uphold the principles of justice (Floridi et al., 2020). Overall, the adoption of algorithmic pluralism can contribute to enhancing social and cultural diversity and promote a more inclusive and just society (Sax, 2022).
The relationship between algorithms and normativity is crucial in all stages of their existence, from design to implementation and usage (Krasmann, 2020). In the case of recommendation systems, algorithmic pluralism should involve designing systems that can accommodate various types of users with diverse preferences and interests, allowing them to express their preferences and make choices. Therefore, algorithmic designers must consider the social and cultural impacts of their content curation practices and encourage diverse content. This pluralistic approach is essential for social media platforms to acknowledge their ethical responsibilities when developing and implementing algorithms.
A Semio-Ethic Approach to Responsibility
There has been a significant increase in the amount of discussion about the ethical principles and ideals that should guide the development and deployment of recommendation systems (e.g., Hagendorff, 2020). The study of ethics applied to recommendation systems is relatively fragmented, yet there are several recurring principles (Hermann, 2022) that are of a high-order, deontological nature (Ess, 2020). Two principles can be appropriated in the definition of responsibility of a platform: “doing no harm” and “doing good” (Scalvini, 2020b). The aim of algorithmic recommendation systems and any application of AI should be to promote societal good (beneficence) while preventing any harm (non-maleficence) toward users. The use of deceptive tactics to gain new users or to manipulate the results of the feed is an example of maleficence, the potential harm posed by algorithms. These principles are essential for ensuring that social media platforms act in a socially responsible and ethical manner, with a commitment to minimizing harm and promoting positive outcomes for individuals and society as a whole.
The obligation to “do no harm” is a critical ethical consideration when it comes to algorithmic recommendations, as harm to individuals can occur either directly or indirectly. In contrast, the principle of “doing good” (beneficence) is a key principle that every social media platform should espouse. This obligation involves actively seeking ways to help individuals achieve a positive level of enjoyment from content or to avoid the distress caused by potential harms and risks (Scalvini, 2020b). The principle of “doing good” extends to the protection and promotion of people’s well-being at all levels, including personal, family, community, and society (Guttman, 2000). By prioritizing “doing good,” social media platforms can play a crucial role in fostering positive outcomes and enhancing the overall well-being of their users.
I argue that the implication of this deontological approach is problematic as it contains restrictions centered on agents. The assumption that users are unable to fully comprehend the impact of algorithms on their lives is challenged by the semiotic perspective which posits that users play a significant role in algorithmic recommendation systems (Matthews & Danesi, 2019) and are involved in an ongoing dialectic between human and non-human agencies (Lee & Björklund Larsen, 2019). This implies that users experience, use, and are part of algorithms (Parisi & Parente, 2021), making it imperative to explore how individuals reflect on the ethics of algorithms as they become ubiquitous in contemporary life. In this context, users have the ability to recognize actual differences, consider alternatives, defend unsuggested actions, and dissent from the outcome of the algorithm (Ananny, 2016).
Converting these ethical principles into operational guidelines can be problematic because it requires treating users as non-autonomous agents (Magalhães, 2018). In such an approach, there is always an element of social engineering that assumes a cognitive neutrality of users. Contrary to the idea that users are unable to truly understand the impact algorithms have on their own lives, semiotics considers that users always have a central role within an algorithmic recommendation system and that they are part of an ongoing dialectic between signs and human interpretation. Therefore, a theoretical approach driven by a “semio-ethic” perspective (Petrilli, 2016, 2017) might facilitate better analysis and design of algorithmic recommendation systems. While semiotics has demonstrated that everything that is human entails interpretation, semio-ethics extends this semiotic knowledge to ethical analysis.
The semiotic approach can be conceptualized as a process of signification, where users and technology are viewed as mutually constitutive interpretive contexts that render each other intelligible (Veltri, 2015). This approach involves the progressive uncovering of the conditions that are necessary for specific ways of signifying in the world and social practices to be perceived as meaningful and coherent (Eco, 1976, 1979). By adopting a semio-ethical perspective, we can examine how users rely on the production and interpretation of algorithmic content selection, and how these interpretations can either reinforce or challenge cultural and social norms. Therefore, semio-ethics is neither a distinct subfield of semiotics nor of ethics. Rather, it appraises the capacity of individuals to engage in listening, critical thinking, deliberation, and taking responsibility for their actions (Petrilli, 2017). This perspective also emphasizes the responsibility involved in designing and using algorithms, and the importance of considering the ethical implications of their semiotic and symbolic dimensions.
In this article, semio-ethics is also used to foreground the relationship between identity and otherness. Emmanuel Levinas (1980) emphasized the transformative power of otherness, a concept related to what he termed “infinity” (p. 23) which can be connected to the concept of algorithmic pluralism through infinite interpretation and creation of meaning. The relationship between identity and otherness is important to consider in the context of these systems as they can either reinforce existing biases and prejudices or promote plurality and inclusivity. The concept of “plurality” can be related to the infinite interpretation and creation of meaning through algorithmic recommendation systems. Therefore, a semio-ethical approach stresses the importance of the user’s ability to listen to others, engage in critical thinking, and assume responsibility in the use of these systems, in addition to considering the potential impact on their own identity and the otherness of others.
A semio-ethic framework is relevant for understanding the ethical implications of algorithmic recommendation systems in social media platforms. These systems can have a profound impact on the way individuals perceive themselves and others, and the kind of information to which they are exposed. The semio-ethics framework is thus beneficial because it emphasizes the mobilization of our semiotic and ethical knowledge during the processes of interaction with algorithmic pluralism. This approach addresses not only ethical issues related to algorithms and moral responsibility but also how users make sense of pluralism in algorithmic selection. In this way, the purpose of this study’s exploration of algorithmic pluralism is not just to hold TikTok liable for the flaws of design but to examine how users perceive the algorithms, and how to resolve the dialectical tension between similarity and diversity.
To summarize, while ethics is conventionally understood as the work that involves discerning “right” actions from “wrong” ones, semio-ethics is, more precisely, a field of inquiry that invites us (1) to reflect on the dialectical tension between similarity and diversity when reflecting on algorithmic pluralism, and (2) to look at the moral ambiguity of digitally mediated representations and interpretations of diversity and pluralism. As such, semio-ethics does not provide clear answers about the best way to formulate solutions. Rather, it offers an opportunity to combine empirical research with normative-ethical analysis and critical reflection. A semio-ethic perspective is thus capable of considering ethical issues related to algorithms and the ethical responsibility of their creators, alongside how users make sense of algorithms.
Research Design
Interviews were conducted in spring 2020 with 40 young international adults who had been residing and studying in the Netherlands for 1–2 years. The interviewees (n = 40) were recruited on a college campus through a snowballing technique. Saturation was operationalized to be consistent with the research questions (Saunders et al., 2018, p. 1893). Interviewees ranged in age from 18 to 24 (M = 22.66) and were predominantly European (n = 26, 65%) and North American (n = 6, 15%). This age range reflects the largest demographic of TikTok users in Europe and North America (Bloomberg, 2020; Statista, 2020). More interviewees were female than male, reflecting campus demographics (women = 24, 60%; men = 16, 40%). Most interviewees were active users (32, 80%) who had used the app at least once in the 7 days before the interview, but only a small number (4, 10%) regularly posted videos. A limitation of the sample is that other demographic qualifiers, such as ethnicity, race, or sexual orientation, were not screened, although perceptions of pluralism might differ considerably if most participants belong to dominant or non-marginalized groups. Although the interviewees come from different backgrounds, it was assumed that they would apply similar forms of ethical reasoning to the specific case of TikTok.
One important question when discussing algorithmic personalization with users is their awareness of recommendation systems. While in the case of TikTok, personalization processes may be relatively apparent (see how in Siles & Meléndez-Moran, 2021, participants initially describe the algorithm as “aggressive”), studies have found different levels of awareness of the algorithm in comparison to other platforms (Gran et al., 2021; Gruber et al., 2021; Swart, 2021). However, given the increased public discussion of algorithms and the “techlash,” users are becoming more aware of this concept (Ytre-Arne & Moe, 2021), even if they may not be familiar with the term (Klawitter & Hargittai, 2018). According to Siles et al. (2022), users may not only understand algorithms on a cognitive level but also recognize their potential to facilitate social connections. To demonstrate this type of awareness, users employ specific practices, tactics, competencies, and skills. For these reasons, the interview protocol did not include any direct question about algorithms; however, the interviewees were asked to browse through their For You page and comment on the trending hashtags in order to elicit varied experiences and perceptions of the role of algorithmic selection. The use of open questions encouraged interviewees to express themselves and include additional information freely.
Interviews lasted for 45–60 min and were semi-structured with open-ended questions. An interview protocol with a list of discussion points was written beforehand (Scalvini, 2020a: Appendix 1). To promote a culture of open scientific inquiry, the present study recognizes the value of open data for discouraging research fraud and permitting critical scrutiny. For this reason, the repository of the anonymized transcripts is deposited on Harvard Dataverse (Scalvini, 2020a) in REFI-QDA format. The goal is not only to increase accountability and transparency but also to encourage a new practice of open data in qualitative research by maximizing the value of the interviewees’ contributions and increasing diversity in analysis and interpretation.
The data analysis software Atlas.ti was used for managing and coding interview transcripts. Qualitative content analysis was applied to the transcripts; specifically, a hermeneutic-interpretative analysis (Kuckartz, 2014) was implemented using a deductive approach based on two main categories: “doing no harm” and “doing good” (Scalvini, 2020b). To ensure the validity of the coding process, three steps were followed (Long et al., 2006). The first step involved evaluating the data to pinpoint ethical principles of beneficence or maleficence (Table 1: column 1). The second step returned to the data to investigate how representations of “harm” and “good” are interpreted by interviewees (Table 1: column 2). The final step focused on extracting illustrations and instances of the identified categories (Table 1: column 3). Furthermore, to enhance the reliability of the study, two researchers with expertise in digital ethics and qualitative methods reviewed the interviews and identified any discrepancies within and across the coding process. They then discussed and refined the adopted categories to ensure consistency.
Table 1. Categories Applied in Data Analysis.
For the purposes of the present research, the decision was made to ensure the anonymity of participants by assigning numerical identifiers, avoiding any associations with nationality or gender. It is critical to recognize that the interviewees for this study were international students residing in the Netherlands and that their identities and experiences could potentially shape their perceptions of the inclusivity of TikTok’s content and the enhancement of its diversity. This limitation means that the study may not provide insights into how group identity influences perceptions of pluralism on TikTok.
To improve the relevance and breadth of research conclusions, future studies should include individual variations such as race, gender, class, and geographic location in their analyses. Researchers should investigate the extent to which individual differences and social factors, including intersectionality, influence our perceptions and evaluations of algorithmic pluralism. By doing so, we can gain a more comprehensive understanding of how algorithmic systems impact different individuals and communities.
Empirical Results: From the Perspective of TikTok Users
Overall, interviewees describe TikTok as a safe space where they can be themselves and feel included in a community of people who are not seeking to promote any product but are interested only in posting content to connect and engage meaningfully beyond difference. Interviewees think TikTok offers an accurate representation of society because of the diversity exhibited in its content. TikTok is described as being used by real-life people, not models or actors. Its authenticity also stands out among TikTok’s social media siblings, an aspect that makes interviewees identify more with this type of content and therefore feel more invited to use the platform. However, interviewees perceive the ForYou page as manipulative. Specifically, they consider the most pressing ethical issue to be the role the algorithm plays in proposing content. The present section organizes the findings according to how users understand algorithmic pluralism and how they reflect on the dialectical tension between the two ethical principles of beneficence and maleficence.
Diversification in Recommendations
Interviewees discussed how TikTok is effective in implementing a strategy to bring diversification to their feed and give them better exposure to new videos, more talented creators, and different perspectives. There is consensus that the diversity of content experienced through the ForYou page gives the impression of bringing people from all over the globe closer together. According to an interviewee (6:360): “TikTok is a platform that enables everyone and anyone to kind of just be themselves.” They therefore consider TikTok a safer space in which body positivity, mental health, and gender fluidity are discussed in a positive light. Interviewees consider there to be a significant number of people who differ from the dominant conceptions of beauty in the Netherlands, North America, or Asia (2:224; 9:117). One interviewee points toward the success of videos featuring people with disabilities on the platform. They further note: “[. . .] this might be to show the world that not everything has to be perfect, or your body can be different than people say it has to be” (2:224).
Interviewees agree that on TikTok, they can show themselves more naturally compared to other social media outlets. They also feel it is easier to identify with this type of content because it is closer to how they lead their daily lives. Not one of them claims to desire the extravagant lifestyle of an influencer; instead, they admire the simplicity of the lives they see on TikTok. According to an interviewee, on other social media: “you’re hiding your flaws, on TikTok, you’re showing them off” (17:314). At the same time, they are very critical of social media such as Instagram and Facebook, because these platforms encourage content producers to create a conventional image of the self. Therefore, enhancing diversity is the feature that plays the most vital role in evaluating TikTok as morally “good.” Overall, interviewees argued that the uniqueness of TikTok lies in its focus on user-generated content and the raw, unfiltered nature of the videos. The short-form video format, which emphasizes storytelling and creative expression over the static images of Instagram, may also contribute to this perception of uniqueness.
See Only What You Are Interested in
Interviewees find that TikTok promotes a safe space for people with different choices and tastes. Remarkably, the interviewees appreciate finding a large amount of content that promotes the inclusion of people from a wide range of ages, origins, skin colors, body types, gender identities, and sexualities. One participant agrees, stating further that, “you have people from all sorts of backgrounds, people from all sorts of body sizes, genders [. . .] without people receiving backlash, as commonly on other social media platforms like YouTube, Instagram, and Twitter” (Interview 38:329). Some interviewees argue they feel less alone on TikTok, while on other social media platforms, everyone feels the need to be perceived as “perfect.” Specifically, TikTok offers the opportunity to browse content from everyone around the world; you “do not just see the same ten people in your hometown” (Interview 38:329). Interviewees share the opinion that since TikTok users can see and discover people from all backgrounds and races, everyone can go viral regardless of where they come from or what they look like. Overall, TikTok has broken the typical norm of social media platforms in this regard, seemingly for the better.
A recurring example of what is considered morally good is the perception of equality, respect, and acceptance of differences toward known others in the TikTok community. Interviewees noticed that teenagers use the platform as an opportunity to come out, or “to even just support a cause or even just put it out there that you know they are acknowledging their orientations or just their personality” (6:360). One participant (38:329) commented, “[. . .] if people are kind of struggling with their gender [. . .] these kinds of videos can help them feel more accepted [. . .].” They see an opportunity in TikTok posts to raise public awareness for the acceptance of non-heteronormative sexual orientation. Overall, interviewees agree that TikTok videos encourage the expression of users’ sexual orientation and might help lessen the alienation of people within the LGBTQA+ community. Interestingly, reflection on their own identities came up significantly less than expected in the interviews. Participants seemed to talk more about other users than about themselves, with only one exception.
Interviewees feel reassured by the idea of having found a social media platform on which so many users openly express themselves and embrace one another’s differences. This aspect apparently contradicts Simpson and Semaan’s (2021) findings, which identified episodes of identity-based harassment on TikTok. A possible explanation for this difference is that the sample interviewed in the present study was mainly composed of Europeans and North Americans studying on a Dutch campus. With such a relatively homogeneous sample, TikTok’s algorithm tends to be more accurate in predicting users’ interests because the data points are unidimensional.
Similarity Through Repetitive Patterns
Interviewees have the impression that the ForYou feed tends to show similar or identical videos one after another, sometimes because the videos use identical sounds or are created by users who are quite similar in their physical attributes. The ForYou page does not recommend identical content that users may have seen already. However, users very often receive suggestions for videos liked by users with interests similar to their own. They mention that, although they enjoy watching challenges, videos can become repetitive when the algorithm repeatedly pushes similar content. For instance, one gay respondent wonders why TikTok keeps proposing hypersexualized videos of “progressive,” shirtless, muscular males fighting homophobia or racism (33:33). This finding is in accordance with Simpson and Semaan (2021), specifically in the sense that TikTok creates a paradoxical space where the algorithm simultaneously supports inclusivity and reaffirms identification and yet transgresses and violates the identities of individual users.
Interviewees consider some of the recommendations feeding the ForYou page to be questionable because they aim at persuading or nudging in favor of particular hashtags and social causes. Specifically, participants define the exploitation of body positivity, gender fluidity, or mental health to generate traffic as an unethical practice. Most interviewees are also concerned that some content producers might exploit social issues to go viral. They suggest that TikTok could make its method of generating personalized recommendations transparent, reducing the threat to user autonomy by providing details as to why certain videos are recommended.
Certainly, interviewees are able to recognize the algorithmic intervention: “The algorithm recognizes that the content is not what I’m interested in” (Interview 10:274). At the same time, participants find the recommendations in the ForYou page intrusive, as they direct users in a specific direction by trying to get them “addicted” (Interview 36:6) to targeted content. Several interviewees mentioned that this endless personalized content and the subsequent compulsivity is the most addictive and fun part of the application. This aspect can make some users feel that the content is repetitive, but at the same time it keeps them engaged with the ForYou page. For this reason, interviewees agree that the content is not there to be inclusive, but rather to keep the users interested.
Binge Scrolling
Interviewees are afraid that TikTok’s features, such as the length of the videos, the scrolling feature, the music, and the matching algorithm, make use of the application more compulsive. Because the application continually shows users precisely the content they want to see, it is tough for users to close it or to remain aware of the time they are investing in it. This binge-consumption, as one interviewee points out, can result in a feeling of guilt over the “wasted time” spent on the application rather than doing something more “productive” (8:61).
The scrolling feature is cited as the main cause of impulsive behavior while interacting with the application; as one interviewee says, “you can scroll and scroll and there is no end to it” (Interview 16:313). Unlike Facebook and Instagram, content on TikTok is never-ending, since the algorithm will always forward new content to its users, and with such a simple mode of interaction (scrolling), people lose track of time (36:158). One interviewee describes this:

. . . when you watch one video, you want to watch the other one, and another one, and another. It is so easy to watch it, you just scroll down. Also, as one video is over, it automatically goes to the next one. I also think it can be addictive to make the videos if you get a lot of likes and comments, you get encouraged to make another one. (2:212)
The length of TikTok’s video content was also discussed. Interviewees agree that, with access to so much content in such a small amount of time, watching videos and getting hooked is very easy. One interviewee explains the “addictiveness” deriving from a typical TikTok video’s length: “videos in 15 to 60 s all have an introduction, a middle and a conclusion, allowing you to watch it and go to the next and the next and the next. . .” (Interview 14:216). Since videos pack everything needed to be catchy into such a short period of time, the user is not aware of how much time can pass while watching a hundred TikTok videos. The brevity of the videos makes it very hard for the user to lose concentration.
The binge-scrolling bears similarities to critiques of gambling culture, which also revolve around the idea of getting hooked and losing track of time. Both TikTok and the gambling industry rely on addictive mechanisms to keep users engaged, with the goal of increasing revenue and user retention. The short duration of TikTok videos and the ease of scrolling through an endless feed of content are akin to the speed and immediacy of gambling activities that offer instant gratification and reinforcement (Zolfagharian & Yazdanparast, 2017).
Vulnerabilities and Lack of Transparency
Interviewees are aware of the many vulnerabilities of TikTok that have been discussed in the news. These concerns are addressed in the interviews when discussing the nature of the video content that appears on the platform. For instance, interviewees express concern about the safety of minors because in the news they read about the presence of pedophiles and sexual predators (Interviews 7, 8), which makes the platform a dangerous place for many of its users. Furthermore, oversexualized content and potentially dangerous challenges are highlighted as problematic for underage users:

There has been a bit of bad press about TikTok; there was a television show that said that pedophiles or people with less good intentions are also able to go on TikTok and they are commenting on younger kids their TikToks, so what do you think for example is essential when a younger child or a GenZer is going on TikTok. (35:223)
One interviewee mentions worrying about the kind of content their young cousin is consuming on TikTok. For instance, interviewees state that some soundtracks have explicit lyrics and are not tailored for children, classifying this as a drawback amid all the positive features the platform presents. For example:

. . . it was quite a song that had explicit phrases and not really tailored for kids, it had bad words and stuff. I was surprised that she knew the song. And I want to ask her something like, “so how do you know the song?.” And she told me about it and was like “Oh yeah I saw it on TikTok” and there’s this challenge about it. Yeah. That was the main kind of like other kind of drawback, but I would say for TikTok’s background music. (6:34)
For this reason, interviewees think that TikTok should take responsibility for improving the platform to eliminate safety issues for minors.
The algorithm is perceived as very efficient in targeting users, which raises concerns about the extent to which their data and private preferences are respected. Interviewees are aware that privacy is the major ethical challenge for TikTok: “I often see things on the news that the privacy on TikTok is not really good. . .” (Interview 7:21). Specifically, interviewees are afraid that data is collected or shared without the user’s permission. In addition, interviewees wonder whether private data may be vulnerable because TikTok is owned by a Chinese company:

I think something on that question is that TikTok is the most important thing facing the western world as it is an app from China. So, I do hear a lot of like arguing about the privacy thing, yeah. . . (5:23)
Irrespective of the degree of security ensured while collecting and storing data, privacy issues may still exist when the recommendation system makes inferences about a user based on their data. Interviewees argue that users may not be mindful of the nature of such inferences, and they may dislike a particular use of their data if they were informed earlier about it (Milano et al., 2020). In this way, they wonder if user data are used in ways that are harmful to their individual autonomy (Magalhães, 2018).
Addressing the Challenges Posed by TikTok
The first research question aimed at understanding how interviewees perceive the ethical responsibility of TikTok. Across 40 interviews, users highlight TikTok’s positive, inclusive content for the individual and society as a principle of “doing good.” However, interviewees voice their concern about how TikTok operates to achieve such targeted diversity of content. Safety and privacy are the most salient ethical concerns raised by interviewees. The main argument associated with privacy is the risk of the unauthorized use of personal data with the intention of harming users.
Hence, according to interviewees, privacy concerns are best explained in terms of risk exposure. User autonomy is also linked to the principle of doing “no harm.” The algorithm is perceived as harmful because it attempts to manipulate and drive users toward specific videos to maximize diversification of the algorithmic outcome, often causing compulsiveness. Questionable algorithmic recommendations are thus perceived as harmful, such as when users are subjected to unfair targeting or the use of manipulative techniques without their explicit consent.
The concerns raised by those interviewed regarding TikTok’s unethical practices highlight the need to address the absence of algorithmic pluralism and the risks that accompany it. Algorithmic systems, such as those used by TikTok, have the power to shape user behavior and influence their perceptions and attitudes. As such, it is crucial to consider the ethical implications of these systems and to design them in a way that prioritizes user well-being over compulsivity. Responsible design principles (Helberger, Pierson, & Poell, 2018) can help mitigate the risks of harm and ensure that algorithmic systems are developed and deployed in an ethical and socially responsible manner.
Such principles could include transparency, accountability, and user control as well as considerations for individual differences and social factors. Scholars have called for the incorporation of responsible design principles in the gaming and social media industries, including the development of ethical guidelines for developers and policymakers (Bhargava & Velasquez, 2021; Cemiloglu et al., 2022). By incorporating such principles, we can help ensure that algorithmic systems, including those used by TikTok, are designed and used in a way that benefits individuals and society as a whole.
Interviewees feel pressure to align with the public discourse around TikTok due to its repeated ethical missteps reported in the media. They rely on inductive moral reasoning, making ethical judgments based on their perception of how it will affect them rather than abstract ethical principles (Rest et al., 2000, p. 384). This point is exemplified by the ways interviewees refer to ethical responsibility. Instead of reasoning in terms of “right” or “wrong,” interviewees make ethical judgments in the form of “this will be good for me” or “this will be bad for me.” Despite recognizing TikTok’s ethical problems, interviewees continue to use the platform because it offers enjoyment and gratification through a paradoxical engagement with the algorithm that simultaneously supports inclusivity and reduces the social complexity of diversity and pluralism (Simpson & Semaan, 2021).
This paradox may be partly explained by the fact that interviewees report their rationalizations in a performative manner to avoid feelings of dissonance with the criticism of this platform. Two critical dimensions of ethical responsibility emerge from this observation. First, interviewees identify pluralism in their own actions, whereby they justify responsibility through an infantilization of their moral agency. Interestingly, this line of classification is typical of judgments made by individualistic persons or, in other words, persons with ethics not marked by pluralism but by a narcissistic regression. Second, algorithmic pluralism should lead to a more group-oriented mindset which can encourage a more collective and socially focused approach to ethical decision-making as opposed to an individualistic ethical orientation. To adequately address pluralism in everyday life from an ethical perspective, it is necessary to balance the satisfaction of individual desires with the complexities of daily living.
The second research question focuses on how interviewees negotiate differences between similarity and diversity, namely, how users understand the semiotic tension between (a) how the platform’s algorithm feeds users similar videos tailored to the user’s interests which they highly appreciate and, inversely, (b) how the recommendation of similar videos might limit the diversity of content to which the user is exposed. Although videos recommended by TikTok might appear to satisfy computational criteria of diversity, the algorithmic outcome masks the absence of true pluralism. The algorithm generates socially desirable videos to allow users to feel comfortable in their ingroup. In other words, recommended videos perpetuate a digital form of conformism in a conscious attempt to create the illusion of a more plural community. Paradoxically, TikTok is perceived to be efficient in promoting an apparent perception of inclusivity, while deliberately erasing alterity and promoting a universal sameness.
It has been argued that algorithmic recommendation systems can contribute to diversity (Helberger, Karppinen, & D’Acunto, 2018). Studies that assess recommendation systems, drawing on knowledge from computer science and psychology, show that diversity in recommendations boosts user satisfaction (Knijnenburg et al., 2012). According to this body of literature, diversity in recommendations therefore has a distinctive purpose in improving users’ experience (Willemsen et al., 2016). However, this diversity is not affirmative: It is not aimed at facilitating the coexistence of diverse interests, opinions, and lifestyles. TikTok’s recommendation system virtually eliminates direct personal and social interaction, shifting the agency from the user to the algorithm. It then concocts a simulated diversity that can be consumed in place of social diversity. In doing so, it replaces each person’s singular, fundamentally unique substance with endless reflections of its own singularity. Thus, it generates a diffuse, narcissistic sense of reality.
When discussing their experience of TikTok, interviewees tend to envision an idealistic virtual community characterized by acceptance and inclusivity, in contrast to the realistic complexities and challenges of a pluralistic society. The individualistic drive of the interviewees is exhibited through their desire for comfort; this is visible, for example, when they describe not feeling the need to adhere to predetermined ideals, such as body type or sexual orientation, in comparison to other platforms (Facebook, Instagram). It is for this personal feeling of comfort that they justify their use of the platform. TikTok promotes the impression of an inclusive space in which the semiotic space of “diversity” only apparently includes “strangeness” (Chouliaraki, 2011; Chouliaraki & Orgad, 2011). Recommended videos thus incentivize a digital form of conformism in a conscious attempt to create the illusion of a more pluralistic community.
From the ethical perspective, this last point is central because it highlights the potential harm associated with the replacement of pluralism with a simulated diversity that generates a universal sameness. The consequence of this phenomenon is undermining the value of human diversity and uniqueness, two important ethical principles. This semio-ethic distinction is particularly relevant to the perception of pluralism shaped by algorithms, and the extent to which such technology is integrated into society (Silverstone & Haddon, 1996). Therefore, future research should prioritize the algorithmic disappearance of the other, its relationship with the mediated self, and the ethical implications of this disappearance. This approach will help ensure that the development and use of algorithmic systems are both ethical and responsible, benefiting both individuals and society as a whole.
In conclusion, this study has an exploratory objective, but it is important to examine algorithmic pluralism from a semio-ethic perspective for both normative and moral reasons. The adoption of semio-ethics can contribute to the improvement of users’ critical thinking abilities, providing tools for informed decision-making, and thereby enabling public discussion about the appropriate functions of algorithms. These questions do not only stem from cultural and social criticism of algorithms. They are also relevant to academic debate on digital media ethics (Arora, 2020; Ess, 2020). This dual approach, based on a semiotic approach to ethics, is capable of considering not only ethical issues related to algorithms, data and standards, but also how social and cultural aspects are mediated by users. An empirical investigation into individuals’ perspectives on algorithms, including their thoughts and feelings, may stimulate a public discourse on the ethical implications of algorithms in shaping our society.
Acknowledgements
This article was written during my lectureship at Erasmus University of Rotterdam. I am deeply appreciative of Prof. Erik Hitters, whose invaluable guidance and support have been paramount in this journey. In addition, I extend my heartfelt thanks to the editor and the anonymous reviewers. Their critical insights and constructive suggestions played a significant role in refining this work.
Declaration of Conflicting Interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author(s) received no financial support for the research, authorship, and/or publication of this article.
