Abstract
Political and moral/religious contents are increasingly popular on TikTok, and the concerns associated with them create the premises for a re-exploration of the user–machine agency negotiation. Using algorithmic awareness as a process, this research examines the relationship between users’ awareness of the TikTok algorithm and the main concerns associated with content that conveys political or moral/religious tenets. A survey of 329 Romanian students showed that greater algorithm awareness influences positive attitudes toward algorithms, but significantly stronger positive effects are observed between awareness and the two mediators related to political and moral/religious content perceived as contentious. Using Foucauldian insights on productive resistance, I argue that in-depth knowledge about the functionality of algorithms empowers users to identify and subvert different forms of power that are algorithmically mediated through political or religious content. When users perceive that they have enhanced agency over what they watch on TikTok, they feel that they can control potential concerns and consequently adopt positive attitudes toward algorithms and the overall platform. Foucault discusses pastoral power as a subtle form of power designed to extract individuals’ deepest secrets. Such power is increasingly algorithmically mediated, given that digital machines expand their agency in often nontransparent ways. Therefore, users’ awareness of the functionalities of algorithms allows them to counteract the various mutations specific to pastoral power while encouraging them to adopt more positive attitudes toward algorithms in general.
Introduction
TikTok has one of the best performing algorithmic recommendations of all digital platforms, and this is reflected in personalized media feeds that are curated based on users’ online behavior. Using natural language processing, TikTok has the ability to intuit and store the visual and auditory elements that users enjoy (Kang and Lou, 2022), managing to anticipate a user's fears and preferences in less than 40 min (Lovejoy, 2022).
In this context, algorithm awareness has been increasingly discussed as a skill necessary for understanding users’ complex media environments (Dogruel et al., 2021; Gran et al., 2021). Other studies have extended the discussion of algorithmic awareness to the parents of teenagers (Taylor and Brisini, 2024) and to the role that perceived contentiousness plays in shaping algorithmic attitudes on TikTok.
While most studies on TikTok have focused on countries such as the USA or China, few studies are available for other countries with regard to outcomes of algorithmic awareness. In Romania, more than a third of the country's total population is on TikTok, that is, 6.5 million users as of 2022 (Neagu, 2022). With such a substantial increase in the number of users, it is no coincidence that the activity of institutional agents has moved to TikTok. In the social sciences, politicians, priests, teachers, doctors, and so on are considered organizational or institutional agents because they all have a concrete role in the socialization process; therefore, they transmit values, ideas, and symbols to other social actors (Berger and Luckmann, 1967).
However, these representatives of social institutions such as politics or religion have adapted very quickly to algorithmically mediated realities, and the forms of power they exercise contribute substantially to the propagation of hatred and manipulation in the digital environment (Obreja, 2024; Rughiniș et al., 2024). Given the coexistence of many subtle forms of power that are increasingly present in the digital space, there is little discussion of the ways in which users actively resist this content and the algorithmic mechanisms by which it is recommended. The relationship with institutional contents not only raises different problems related to normativity (Cotoi, 2011) but also puts some users in the position of targeting and subverting potential manifestations of power perceived as repressive. Using both Foucauldian insights on resistance (Foucault, 2000, 2007) and more recent perspectives that discuss algorithmic awareness as a process (Siles et al., 2020; Siles et al., 2022), I argue that the ways in which users understand the functionalities of recommendation algorithms on TikTok encourage the detection and labeling of certain political or religious content as extremist. This is because understanding the role of content curation algorithms encourages users to express their needs online and, at the same time, to campaign against perceived power asymmetries on social media (Kang and Lou, 2022).
Literature review
Using knowledge to resist: A Foucauldian perspective
As the variety of studies on social media algorithms shows, the role of user resistance is rather neglected: such studies mainly discuss the dominance of algorithms as a mode of surveillance (Zuboff, 2015), as a form of “soft biopolitics” (Cheney-Lippold, 2011), or as specific technological instances of a black box society (Pasquale, 2015). Although Foucault's perspective focused mainly on power and the more or less subtle ways in which it is exercised, toward the end of his academic activity he shifted his focus from power to the subject (Foucault, 2000). This approach proves productive for investigating how a subject can actively resist the governing principles that are exercised in most spheres of social life (Foucault, 2007).
It is important to point out that although Foucault (1995) initially characterized power as productive, he gradually highlighted a similar productivity within the resistance exercised by human subjects as a way of outlining an “antagonism of strategies” (Foucault, 2000: 329). By understanding and making sense of the digital affordances present on social media, users gain an enhanced agency that helps them resist digital governance (Ettlinger, 2018). As I understand institutional activity on digital platforms, institutional agents become relevant precisely because of their relationship to the algorithms that help them go viral, not necessarily through the institution they represent online. When I talk about institutional agents or institutional content, I have in mind the sociological concept of a social institution, which designates a set of norms, rules, or values intended to regulate the behavior of individuals in a given society. Once such content depends on a few lines of code to reach users and go viral, it becomes clear that productive resistance's main aim “… is to attack not so much such-or-such institution of power, or group, or elite, or class, but, rather, a technique, a form of power” (Foucault, 2000: 331, own emphasis).
Between agency and functionality: TikTok algorithmic awareness as process
Although algorithmically curated content is found on all digital platforms, the algorithms on TikTok are well known for being more powerful and intrusive with regard to generating the most desirable recommendations. When recommending videos to users, the TikTok algorithm uses natural language processing to identify both visual and auditory components that users prefer through active or passive reactions (Kang and Lou, 2022). Specifically, TikTok's algorithm is so powerful that it can predict users’ preferences, likes, and fears in less than 40 min of use (Lovejoy, 2022).
Algorithmic awareness has begun to be discussed as a relevant digital skill through which social media users make sense of data-driven digital technologies. This approach is important because it takes into account the heterogeneous and nonlinear ways in which individuals come to understand and communicate within platforms that exercise forms of algorithmic curation (Karizat et al., 2021). Algorithm awareness has also been developed in the context of TikTok to highlight the steps through which algorithms influence digital behaviors (Siles et al., 2022). The process of algorithmic awareness is dynamic and opaque because the criteria for personalizing content on social media evolve gradually, while the valences of this awareness can rarely be verified (Gran et al., 2021; Hargittai et al., 2020). Algorithmic awareness can therefore be defined as the amount of knowledge that users have about how algorithms work, from the use of digital traces and metrics to recommend content to the technological, moral, and ethical implications of such an approach (Zarouali et al., 2021). Subsequently, the acquisition of a certain level of awareness should lead to certain algorithmic attitudes and, consequently, to the shaping of communication behaviors on that platform, a stage that refers to behavioral outcomes. What users know about algorithms (Cotter and Reisdorf, 2020) differs according to certain sociodemographic characteristics, such as age, sex, and education (Espinoza-Rojas et al., 2022; Gran et al., 2021).
The quality of the personalizations a user receives in the news feed, that is, the quality of curation in representing users’ values and opinions, influences the adoption of certain attitudes toward these recommendation algorithms. Thus, users form positive attitudes when algorithms prove their effectiveness in curating these users’ multidimensional interests as part of their complex activity within media ecologies (Espinoza-Rojas et al., 2022; Siles et al., 2022). Conversely, algorithmic recommendations perceived as inappropriate or aggressive will attract negative attitudes toward these algorithms (Bucher, 2019; Swart, 2021). The relationship between algorithmic awareness and subsequent attitudes is already frequently discussed in the academic environment, but important theoretical resources, such as algorithmic knowledge as a form of productive resistance (Ettlinger, 2018), are mostly neglected. Based on the literature on algorithmic knowledge as resistance, I observe that users tend to interpret algorithmically rendered content as episodes of the exercise of power in a Foucauldian sense, which creates the premises for contesting these subtle forms of power. Thus, users will probably adopt particular algorithmic attitudes by being aware of content recommendations that they evaluate as contentious or extremist.
While most forms of digital knowledge are usually framed in terms of digital literacy (Cho et al., 2024; Manca et al., 2021), I argue that the algorithmic awareness approach provides a more comprehensive understanding of the governing potential exercised by these digital platforms. Digital literacy focuses mainly on users’ basic skills and the adjacent digital tools present on social media, whereas algorithmic awareness insists mainly on the development of critical thinking regarding both the visible and invisible aspects of affordances. This distinction is all the more important in the case of TikTok, given that it has often been rated as performing better at recommending content tailored to users’ needs and values compared with the algorithms on Facebook and Instagram (Taylor and Choi, 2022).
Regardless of whether different algorithmic benchmarks are used, such as folk theories, imaginaries, or stories, all of these forms of conceptualization highlight different manifestations of algorithmic awareness (De Cicco et al., 2024; DeVito et al., 2018; Obreja, 2024; Siles et al., 2020; Schellewald, 2022; Ytre-Arne and Moe, 2021). For example, the affective dimension of users has become prevalent (Papacharissi, 2014); thus, digital communication scholars have discussed a plethora of feelings among social media users: surprise (Swart, 2021), oscillation (Siles et al., 2022), and appreciation or guidance (Obreja, 2024).
In this context, a rise in machine agency is discussed with regard to TikTok, given that its AI affordances manage to personalize user content in the absence of active input. Such enhanced machine agency proves problematic because it is often perceived as a threat by users (Jia et al., 2012; Kang and Kim, 2020).
The discussion of algorithmic awareness is particularly important because it highlights certain benchmarks related to agency: the ways in which users are aware of and label the algorithm on TikTok contribute to the emergence of “algorithmic sovereignty” (Reviglio and Agosti, 2020), according to which opaque algorithmic realities prevent users from actively participating in social media and identifying potential threats. Thus, when users believe that they have enhanced algorithmic awareness, they will be more confident about their needs (Sundar, 2020), allowing them to interpret more comprehensively the nature of the recommended content (Dietvorst et al., 2018; Schellewald, 2022). Next, I explain the interference of social institutions on TikTok through the activities of “traditional” organizational agents, such as political and religious/moral ones (Obreja, 2024).
Controversial political and moral/religious content at the confluence between algorithmic awareness and attitudes
When social and institutional realities are mediated through recommendation algorithms, the very exercise of authority becomes a form of alienation (Hallinan and Striphas, 2016). This alienation reflects the emerging transfer of authority from “traditional” institutional representatives, such as priests, politicians, or teachers, to an automated authority exercised through machinic calculation. This does not mean, however, that these institutional agents are becoming obsolete. Rather, with the assimilation of different levels of algorithmic awareness, both those who create content and those who watch it depend more and more on this knowledge capital. However, this capital emerges in very heterogeneous ways, given that algorithmic power affects users very differently. Cooper (2020) explains how the algorithms of digital platforms gradually take over various techniques specific to pastoral power in order to shape servile and obedient subjects who depend on an authority that claims to be ubiquitous and omnipresent. Thus, “algorithmic governance destabilizes the liberty and autonomy of the human subject, thereby bearing the signature of its pastoral archē” (Cooper, 2020: 40, emphasis in original).
Pastoral power becomes effective over human subjects only when they end up “revealing their innermost secrets” (Foucault, 2000: 332). However, the full knowledge of these subjects is undermined when users develop ways of resisting algorithmic power, contributing to the formation of counter-conduct. As I argue in the context of algorithmic awareness as a process, such productive counter-conducts not only offer users new tools to resist algorithmic domination but also create the premises for the adoption of positive attitudes toward algorithms overall. This happens because users with increased algorithmic knowledge develop their agency in the negotiation between human and machine, managing at the same time to challenge the political and religious contents that they consider contentious. Such a mechanism reflects the fact that users understand and actively oppose nontransparent forms of increasing machine agency and decreasing user agency (Kang and Lou, 2022). This patchy process of agency negotiation takes place even if the algorithms themselves appear to be objective (Beer, 2022), given that the mechanisms by which they are deployed bring users into contact with content that they would normally consider controversial or to be avoided (Ettlinger, 2018).
In an increasingly digitally mediated world, face-to-face communication is gradually giving way to communication on digital platforms such as Facebook, Instagram, and TikTok. The socialization process has therefore also changed, so that the most relevant information is the information that succeeds in becoming viral, “tricking” the algorithms of these platforms (Couldry and Hepp, 2017).
In the social sciences, institutions play an active role because they shape the processes of knowledge by transmitting different values, principles, and symbols. Thus, it is possible to evaluate the activity of institutional agents active on TikTok, such as politicians, priests, or doctors, given that algorithms highlight content of a political, moral, or religious nature as examples of digitally mediated socialization (Obreja, 2024). While identity markers on TikTok and social media at large are mainly discussed with reference to political content (Gonyea and Hudson, 2020; Highfield and Leaver, 2016), the institutional relevance of these types of content is little debated in the digital communication framework. By institutional relevance, I mean the character of some social institutions, such as politics, religion, and education, given that they are responsible for shaping values, ideas, and relationships between individuals. Thus, institutional agents such as politicians, priests, doctors, or teachers have an increasingly dynamic and creative role on digital platforms, whether through memes (Hosszu et al., 2022; Rughiniş and Flaherty, 2022; Zeng and Abidin, 2021), short and creative videos prepared manually, or artificial intelligence (Bran et al., 2023). Additionally, institutional agents active on social media play an important role in shaping user identity, as digital platforms such as TikTok create the impression of direct interaction. This impression is also encouraged by certain digital affordances specific to TikTok, such as the duet feature, which allows users to append a new video alongside the original one (Zeng et al., 2021). All these affordances, together with the increasingly viral activity of political and religious/moral agents, lead to an increase in user awareness of sociopolitical issues (Hautea et al., 2021) or even in online opinion expression (Oz et al., 2024).
Therefore, a rising awareness of the nature of political content on TikTok has frequently been associated with the varied ideological interests of these institutional agents. It is also noted that expectations regarding the “intentions” of the platform change with use. As Herrman (2020) shows, nonusers believe that “TikTok is an engine for progressive young politics,” but with increasing use it is easy to see that TikTok effectively responds to users’ expectations and preferences. The performance of the TikTok algorithm is also assessed through passive use, for example, by the duration of viewing some content without clicking on likes or shares. Through advanced systems of personalized recommendations, AI-based algorithms influence, to a good extent, the content we view daily on social media (Bucher, 2018; Swart, 2021).
Alongside political content tailored to TikTok's recommendation algorithms, religious content has also reached unprecedented popularity. For example, in their discussion of the da'wah movement on TikTok, Maghfirah et al. (2021) showed that the religious content of this movement has the advantage of being accessible and easy to understand, facilitating its propagation on TikTok. The sociopolitical consequences of such content are obvious. By analyzing more than a thousand pieces of viral content related to abortion on TikTok, Wu and Byler (2022) showed that pro-abortion content received more views than anti-abortion content and that the design of TikTok's algorithms effectively polarized such topics.
Given that content on social or political themes frequently contributes to the polarization of users, many of them are aware, at least partially, of the role that algorithms play in this process. I therefore discuss such political/religious content on TikTok as a mediator between algorithmic awareness and algorithmic attitude and consider the following research questions:
Method
Participants and procedures
Students from two universities in Romania who are active users of TikTok (using the app at least once a month) were recruited for this study based on their availability. Participants were invited by me and by two colleagues, who sent them a QR code linking to the questionnaire. Among the students who expressed the intention to participate in the study, 516 met the screening criteria, and 329 completed the final survey. Additionally, students were encouraged to share the questionnaire with their peers who use TikTok, but the number of completions from these categories remained quite low (n = 26). Participants provided their prior consent for data processing and were assured of the confidentiality of the data. This research received approval from the Academic Research Ethics Committee of my university. The sociodemographic characteristics of the participants are shown in Table 1. It is important to state that students from both universities are enrolled in sociology and/or human resources departments and therefore tend toward rather liberal profiles, which may heighten their awareness of potentially extremist or propagandistic content on TikTok. This more liberal orientation affects to some extent the generalizability of the results, especially to a predominantly conservative population that may have distinct levels of awareness toward algorithms.
Participant sociodemographics.
The cultural dimension of this sample may be of particular relevance. Given the communist regime that existed in Romania until 1989, it is certain that some social institutions, such as religion and politics, are of increased importance. In a cultural space where religious or spiritual manifestations were formally prohibited and the only accepted political adherence was that toward the single party, the relationship of individuals with these two institutions deserves close investigation, even in the current context of algorithmically mediated interactions.
Measures
Following a previous approach investigating TikTok algorithm awareness as a process among parents of teens (Taylor and Brisini, 2024), I also introduced a qualitative component designed to help respondents articulate their awareness levels. Specifically, the following open-ended question was added: TikTok is a popular social media platform for young people. This platform uses a feature called For You. As much as possible, please explain how you think TikTok and the related For You stream work. If you do not know, write it below.
This open-ended question constitutes a preliminary benchmark in this analysis of algorithmic awareness; thus, in this stage, I investigated respondents’ answers in a qualitative manner using specific principles of abductive research (Timmermans and Tavory, 2012). Specifically, I analyzed respondents’ answers according to previous studies that referred to the TikTok algorithm; however, I also paid attention to potential surprising evidence. Thus, several open codes (Elo and Kyngäs, 2008) related to how the algorithm works were grouped into broader dimensions, the main such dimensions being audio–video processing, active behavior monitoring, and passive behavior monitoring. All of these broader themes also reflect the diverse processes of agency negotiation between humans and machines on TikTok.
Algorithm Awareness. Starting from the assumption that algorithmic awareness works as a process in which users learn new intensities of awareness as they use TikTok (Siles et al., 2022), I used items from the algorithmic media content awareness scale (Zarouali et al., 2021). These items were rated on a 5-point scale, from 1 “not at all aware” to 5 “completely aware.” Among others, the algorithm awareness scale includes items related to content filtering awareness, for example, “Algorithms are used to recommend videos on TikTok”; automated decision-making, for example, “Algorithms are used to show me videos on TikTok based on automated decisions”; human-data interplay, for example, “The videos that algorithms recommend me on TikTok depend on my online behavior on that platform”; and ethical challenges, for example, “The videos that algorithms recommend to me on TikTok are subjected to human biases, such as prejudices and stereotypes” (Zarouali et al., 2021). In total, the 13 statements were summarized into an index. I also tested the internal reliability of the items using Cronbach's alpha and obtained a desirable value of α = .757.
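To make the index construction concrete, the snippet below is a minimal sketch in Python, with toy data and hypothetical column names (aw_1 … aw_13) rather than the actual survey file; it illustrates how a 13-item battery can be summarized into an index and checked with Cronbach's alpha, not the author's own analysis code.

import numpy as np
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    # alpha = k/(k-1) * (1 - sum of item variances / variance of the summed score)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Toy data standing in for 329 respondents rating 13 items on a 1-5 scale
aw_cols = [f"aw_{i}" for i in range(1, 14)]
df = pd.DataFrame(np.random.randint(1, 6, size=(329, 13)), columns=aw_cols)

alpha = cronbach_alpha(df[aw_cols])                   # the paper reports alpha = .757
df["algorithm_awareness"] = df[aw_cols].mean(axis=1)  # summary index (mean; a sum would work equally well)
print(f"Cronbach's alpha = {alpha:.3f}")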
Algorithm Attitude. To measure users’ attitudes toward the TikTok algorithm, I used a preestablished index composed of nine items that measure attitudes toward social media algorithms (Silva et al., 2022), ranging from 1 “strongly disagree” to 7 “strongly agree.” Given that some items assign positive values to the algorithm (e.g., efficient, correct, necessary) and others assign negative values (e.g., biased, invasive, manipulative), I reverse-coded the items in the latter category before building the index of TikTok algorithmic attitudes so that higher scores consistently indicate more positive attitudes. The internal reliability analysis revealed an acceptable Cronbach's α = .682.
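The reverse-coding step can be illustrated as follows; this is a hedged sketch with hypothetical item names and toy data (the actual nine-item wording follows Silva et al., 2022), not the author's code.

import numpy as np
import pandas as pd

pos_items = ["att_efficient", "att_correct", "att_necessary"]
neg_items = ["att_biased", "att_invasive", "att_manipulative"]
att = pd.DataFrame(np.random.randint(1, 8, size=(329, 6)),
                   columns=pos_items + neg_items)  # toy 7-point responses

# Reverse-code negatively worded items so that higher scores always mean a more
# positive attitude: on a 1-7 scale, reversed = 8 - raw (1 <-> 7, 2 <-> 6, ...)
att[neg_items] = 8 - att[neg_items]
att["algorithm_attitude"] = att[pos_items + neg_items].mean(axis=1)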
TikTok Concerns: Political and Moral/Religious Content. For these mediating variables, users were asked about their main worries associated with the political and religious content that goes viral on TikTok. These concerns were derived from previous qualitative-exploratory studies that investigated institutional landmarks on TikTok (see Kamran, 2023; Obreja, 2024; Scalvini, 2023; Siles et al., 2020). With regard to political content, four items were introduced referring to users’ related worries: (a) spreading misinformation and fake news, (b) shaping conspiratorial communities, (c) potential extremism, and (d) age-inappropriate political values, such as endorsing deportations or hatred based on criteria such as sexual orientation or race. All four items revealed good internal reliability (Cronbach's α = .881). The concerns related to religious/moral content on TikTok included three items: (a) petty interest at the expense of moral/ethical interest, (b) extremism and fanaticism, and (c) discrimination based on religious criteria (Cronbach's α = .893). These contentious content perceptions were measured on a scale from 1 “not at all worried” to 7 “very worried.”
Covariates. I included a series of covariates in the regression model, such as sex, ethnicity, income, education, and frequency of TikTok use, measured from 1 “once a month or less” to 5 “daily or almost daily.” Table 1 shows the distribution of participants according to these variables.
Results
Preliminary qualitative analysis
Of the 310 responses received to the open-ended question regarding users’ algorithmic awareness of the For You feed, 45 (15%) indicated a lack of algorithmic awareness because respondents did not know how this feed works. Most of the answers fell into the enhanced awareness category (n = 157, ∼51%), indicating that the TikTok algorithm provides content on For You based on active inputs from users, such as likes, comments, shares, or previously mentioned interests. Another 50 responses fell into the partial awareness category (∼16%), indicating that For You works on the basis of passive inputs from users, namely, the duration of viewing a certain content or the complex processing of the sounds and images that a user prefers. A further 40 responses (13%) fell into an area of limited awareness: users are aware that there is an algorithm or certain criterion according to which the contents of their feeds are displayed, but they do not mention any agency on the part of the user in the subsequent rendering of this content on For You. The least mentioned category is the viral-oriented one, comprising 13 comments (4%) in which users indicate that they are recommended the most viral content on TikTok or the content containing the most accessed hashtags at a certain moment. Finally, four comments (1%) mention the mindreader or oracle character of TikTok's algorithms, which manage to guess users’ preferences without any prior input; these users attribute powerful mindreading skills to the For You feed. Table 2 provides a relevant description of each theme, along with concrete examples of the answers given to the open-ended question.
Users’ types of awareness of For You feed.
Predicting perceived political/religious extremism on TikTok
RQ2 investigates whether there is any association between users’ algorithmic awareness and (a) TikTok political content perceived as contentious/controversial and (b) TikTok religious/moral content perceived as concerning. Figure 1 shows the unstandardized regression coefficients in the path analysis performed using IBM AMOS, and Table 3 shows the coefficients for the main covariates for the three endogenous variables included in the model. The results show that users’ algorithmic awareness of TikTok positively predicts both political and moral/religious content perceived as contentious. Therefore, being aware of the main functions of the algorithms on TikTok also implies an enhanced awareness of the main purposes “behind” contents of a political or religious nature. More precisely, algorithmic awareness predicted worries related to political content (B = 0.711, p < .001) and worries related to religious/moral content on TikTok (B = 0.681, p < .001). In addition, the frequency of TikTok use was negatively associated with both mediators, but only the association with political content was statistically significant (B = –0.106, p = .045 for political content; B = –0.139, p = .185 for moral/religious content). See Tables 3 and 4 below for the main correlations and effects between the variables included in the model.

Path analysis model for predicting algorithmic attitudes on TikTok.
Descriptive statistics and Pearson correlations for the variables included in the model.
Note: N = 329.
*p < .05; **p < .01; ***p < .001.
Regression model with covariates and algorithmic awareness for the three endogenous variables.
Note: N = 329.
*p < .05; **p < .01; ***p < .001.
Predicting algorithm attitudes via awareness and institutional content
Through RQ3, I examined the relationship between algorithmic awareness and algorithmic attitude, both directly and indirectly through political and moral/religious content. Figure 1 shows small but significant effects on algorithmic attitude: a negative effect of worries about political content (B = –0.170, p < .001) and a very small positive effect of worries about moral/religious content (B = 0.064, p = .042). The negative relationship between concerns about political content and algorithmic attitudes coherently reflects how the awareness of potential political concerns on TikTok, such as extremism, radicalization, and content inappropriateness, prevents, to some extent, the adoption of positive attitudes toward the TikTok algorithm.
Although the relationship with moral/religious content on TikTok is positive, it is quite weak. Even after the introduction of these two mediators, the relationship between algorithmic awareness and algorithmic attitude remained positive (B = 0.150, p = .048),1 which confirms previous insights into how a better understanding of algorithmic functionalities influences the adoption of more positive attitudes toward social media algorithms (Dietvorst et al., 2018; Sundar, 2020).
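For readers who want to retrace the mediation structure, the following is a rough sketch of an analogous model estimated with ordinary least squares in Python (statsmodels); the paper itself uses path analysis in IBM AMOS, so this is only an illustration of the product-of-coefficients logic, with simulated data, hypothetical variable names, and the covariates reduced to TikTok use frequency.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 329
data = pd.DataFrame({
    "awareness": rng.normal(3.5, 0.6, n),   # algorithmic awareness index
    "use_freq": rng.integers(1, 6, n),      # TikTok use frequency (1-5)
})
# Simulated mediators and outcome, loosely echoing the reported coefficients
data["political_concern"] = 0.7 * data["awareness"] + rng.normal(0, 1, n)
data["religious_concern"] = 0.7 * data["awareness"] + rng.normal(0, 1, n)
data["attitude"] = (0.15 * data["awareness"]
                    - 0.17 * data["political_concern"]
                    + 0.06 * data["religious_concern"]
                    + rng.normal(0, 1, n))

# a-paths: awareness -> each mediator (with the covariate)
m_pol = smf.ols("political_concern ~ awareness + use_freq", data).fit()
m_rel = smf.ols("religious_concern ~ awareness + use_freq", data).fit()
# b-paths and direct effect: awareness and both mediators -> attitude
y = smf.ols("attitude ~ awareness + political_concern + religious_concern + use_freq", data).fit()

print("direct effect of awareness:", round(y.params["awareness"], 3))
print("indirect via political concern:", round(m_pol.params["awareness"] * y.params["political_concern"], 3))
print("indirect via religious concern:", round(m_rel.params["awareness"] * y.params["religious_concern"], 3))

In the actual analysis, standard errors for the indirect effects would be obtained through bootstrapping or through the structural equation estimator in AMOS rather than from the separate regressions shown here.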
Discussion
In this study, I used algorithmic awareness as a process to investigate what role social institutions (mainly politics and religion/morality, through their agents) play in the relationship between algorithmic awareness and attitudes among Romanian students. As seen, increased algorithmic awareness strongly influences subsequent concerns regarding the “adverse effects” of political and religious content on TikTok, which highlights that the Romanian users in this sample (mainly college students) are aware that algorithmic realities contribute, to a good extent, to shaping the nature of the content viewed on TikTok. Given that algorithmic awareness is a digital skill (Taylor and Brisini, 2024), the purpose of this research was to determine whether such a skill influences the perception of the numerous contents of an extremist or propagandistic nature on TikTok, an assumption that proved true both for political content and for religious/moral content.
Algorithmic awareness as a process suggests that once users become aware of algorithmic existence, they will develop attitudes that significantly impact their daily media use (Siles et al., 2022). Previous studies have shown that a more thorough understanding of algorithmic functionalities influences the adoption of positive attitudes about algorithms (Espinoza-Rojas et al., 2022; Gran et al., 2021; Kang and Lou, 2022), and this small and positive effect is also confirmed in this study. Living in a world characterized by algorithmic governance (Cotter and Reisdorf, 2020) involves a variety of interactions with these algorithms, and these interactions shape users’ institutional awareness of TikTok by helping them perceive potential existing concerns.
Controversial/contentious institutional content as an outcome of algorithmic awareness
While institutional trust influences online behaviors of endorsing online information that might not be trustworthy (Van Zoonen et al., 2024), an increased familiarity with algorithmic functionalities yields the opposite result: users readily identify and label misleading online content. These results confirm the need to discuss algorithmic awareness, given that this skill familiarizes users with a variety of concerns in the online environment, which ultimately contributes to the adoption of particular attitudes toward the algorithm.
Unlike the participants in the study of Siles et al. (2022), the Romanian students in my sample are aware of the existence of algorithms, frequently mentioning them in the open-ended question. This evolution over time not only indicates an increasing familiarity with algorithmic functionalities on social networking sites but also legitimizes, to a good extent, the development of an awareness of the contents propagated by institutional agents who have a variety of interests in the online environment (Obreja, 2024). In my study, I identified strong and positive effects of algorithmic awareness on both political and moral/religious concerns on TikTok, which has serious implications for sociology as well as for communication. First, I observed how algorithmic literacy works as a process whereby certain categories vulnerable to extremist or misleading content on TikTok gain resistance. Such resistance is not achieved out of inertia but actually reflects various types of human–machine agency negotiation. Thus, the detection and awareness of potentially extremist content on social media shows that users are frequently aware of their subjection, which is why they actively fight against these forms of algorithmically mediated power. However, for the resistance to these contents to be productive, it is essential that users are aware of the different functionalities specific to the algorithms, given that the algorithms are responsible for displaying these contents.
The viral nature of the content propagated by institutional agents such as politicians and priests confirms the implications of the qualitative research of Scalvini (2023), according to which different content producers take advantage of certain controversial social issues to gain popularity on TikTok. Therefore, “it is suggested that TikTok could make its method of generating personalized recommendations transparent to reduce the threat of violating their autonomy by providing them with details as to why TikTok recommends certain videos” (Scalvini, 2023: 8). Various social issues such as racism, sexism, and homophobia are capitalized upon by both political and moral/religious agents, but the performance of the TikTok algorithm outlines a paradoxical space (Simpson and Semaan, 2021) in which diversity and inclusiveness are promoted even as users’ identities are violated and undermined. Therefore, algorithmic awareness must be investigated as a complex process through which users begin to understand the potential concerns that arise from content recommendations.
Institutional mediation and attitudes toward algorithms
The positive relationship between algorithmic awareness and algorithmic attitudes confirms that participants’ relationship with TikTok changes as they become familiar with the application. This is also confirmed in Table 4 by the positive relationship between the frequency of TikTok use and the adoption of favorable attitudes toward the TikTok algorithm. Therefore, the gradual use of the application creates the premises of a personalized relationship (Siles et al., 2022) in which the TikTok algorithm anticipates users’ main interests.
Additionally, users’ concerns about political content on TikTok prove to be a reliable predictor of algorithmic attitudes. This negative relationship between the two shows how the perception of extremist political content on TikTok prevents users from adopting positive attitudes toward algorithms. Therefore, future studies could examine the character of these political contents in more detail through digital ethnography, interviews with users, or even extensive computational procedures such as sentiment analysis. The weakly positive relationship between algorithmic attitudes and the worries associated with moral/religious content can be attributed to the difficulties in identifying such recurring content on TikTok, since the goals of content propagating moral/religious cues are harder to identify than those of political content. This weak positive relationship also legitimizes the existence of a variety of subjectivities on social media. Such subjectivities can identify potentially extremist content but do not necessarily show active resistance to it. As Ettlinger (2018: 4) notes, “Some digital subjects, for example, may be less critically aware of their situation in digital governance while signing on to resistance efforts by becoming relatively passive members of groups or making use of the benefits of resistance strategies forged by others.” Thus, active resistance also requires continuous self-discipline (Foucault, 2000), through which users understand and critically treat the forms of power exercised through reels and viral videos. The weak positive relationship between the two variables could also indicate, to some extent, the lingering effects of the censorship directed at forms of religious or spiritual manifestation during the communist regime in Romania. Such censorship might contribute to the rather diffuse association, whereby certain religious contents, although evaluated as potentially concerning, can still contribute to the adoption of relatively positive attitudes toward algorithms.
As previously observed, when users materialize their will in relation to the contents available on TikTok, they understand that several contents perceived as aggressive or contentious are the product of an enhanced machine agency that must be properly understood (Kang and Lou, 2022). This happens because individuals are eager to control the environment they are part of, and the daily experience of algorithms in contexts evaluated as more or less concerning gives users a “reason to react” (Bucher, 2019: 41). Consequently, users have the necessary incentives to form concrete attitudes toward algorithms, which stimulate them to adopt different levels of agency.
Thus, the results suggest that institutional agents play an increasingly visible role in the process of digitally mediated socialization and that the propagation of political or religious/moral content on TikTok cannot succeed unless that content becomes viral (Couldry and Hepp, 2017).
Limitations
Naturally, my research has certain limitations. First, the sample size (n = 329) may prevent the identification of certain small effects, especially in relation to the covariates, which are mostly nonsignificant. Political and religious worries could also be investigated within more diversified samples, given that these phenomena can differ considerably across other cohorts or more heterogeneous levels of education. However, even though the data were collected cross-sectionally (mainly among Romanian university students and their peers), some important and predominantly statistically significant effects were identified, especially through the exploratory inclusion of the two mediating variables. As noted, the more liberal orientation of the students enrolled in this study might limit the generalizability of these results to a predominantly conservative population that may have distinct levels of awareness toward algorithms. Further research could longitudinally examine, across different samples, the extent to which the relationship between algorithmic awareness and attitudes changes, providing new empirical insights into algorithm awareness as a process. Such an approach is useful for investigating the extent to which the causal direction of the variables in the model holds.
Additionally, sociodemographic limitations are present, given that the research sample was predominantly White and heterosexual and did not report high monthly incomes, characteristics that correspond to a good extent with the sociodemographic structure of students in Romania. Future studies could therefore investigate more intersectionally diverse populations, given that users receive content that is personalized and tailored to their values and interests.
Conclusion
This research is part of the analytical sphere related to algorithm awareness as a process (Siles et al., 2022; Taylor and Brisini, 2024), investigating the role that potentially contentious political and moral/religious content plays as an outcome of algorithmic awareness. In a context where the activity of algorithms on social media is often conceptualized as a more or less subtle form of power, users often end up mounting a productive resistance in response to these algorithms. Capturing this resistance involves a process in several stages, the first referring to the acquisition of algorithmic knowledge as a form of awareness (Step 1), followed by the evaluation of various institutional contents as extremist or contentious (Step 2) and, finally, the adoption of certain attitudes toward the algorithms themselves (Step 3). This research thus highlights new implications of the use of algorithmic awareness on social media, offering at the same time a productive conceptualization of the Foucauldian idea of resistance against subtle forms of power that are increasingly algorithmically mediated. The empirical examples of identifying certain institutional contents and labeling them as contentious represent a conclusive example of productive resistance that, unlike passive forms of resistance through avoidance or obfuscation (Ettlinger, 2018), allows the subversion of oppressive forms of digitally mediated power.
Footnotes
Acknowledgments
For this research, I received very useful help both from Paula Tufiș, PhD, with whom I discussed the coherence of the path analysis model, and from Ștefania Matei, PhD, who repeatedly guided me with vital theoretical references in explaining the statistical relationships obtained. I am also grateful to the three anonymous reviewers, who provided me with very constructive suggestions to make this research more theoretically and empirically relevant.
Declaration of conflicting interests
The author declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author received no financial support for the research, authorship, and/or publication of this article.
