Abstract
It is no secret in the consumer marketing literature that people project personalities onto the machines they interact with, eliciting intimate emotions. Importantly, the fact that proprietary algorithms are 'black box' machines likely leaves emotional gut feelings as the means to which people resort in evaluating algorithmic outputs. This study advances the thesis that emotion is an intuitive basis for guiding informational judgment, with algorithms exacerbating human susceptibility and possibly leaving people vulnerable to automated decisions and their potential bias. Three related studies were conducted, using subsamples of a U.S. population survey, to investigate the dynamics of the emotional correlates associated with people's exposure to algorithms (Study 1 and Study 2) and its consequences (Study 3). Study 1 found that exposure to algorithms via personal data endorsement was significantly associated with emotional traits, which served divergent functions in people's reception of algorithms. Study 2 replicated these findings in the contexts of (1) the Facebook algorithm and (2) stand-alone AI (legal, financial, and employment decisions). Study 3 traced the consequences of affirmative algorithmic reception in its contribution to the way people perceive the accuracy of information represented in algorithm-based social media. This study's proposition is that in dealing with personal data demands, the constant rewards of automatic access and use make digital consumption susceptible. The evidence suggests that the cognitive burden eases as emotion is encouraged to reign as a prime source of judgment, discouraging users from switching off, rejecting, or critically receiving algorithm-mediated information.
Plain Language Summary
This study looks at the function of emotion in people's use of various algorithmic environments. More specifically, its primary question is how emotion is formed, and with what consequences, in people's exposure to various algorithm-based platforms. Three studies based on U.S. national surveys were conducted to answer this question. The lesson from the analyses is that emotion might fill the cognitive gap in the use of algorithms. Because people hardly have a rational way to strategize their behavior when exposed to algorithms, they often resort to emotional gut feelings. In other words, emotion remains a cue guiding people's behavior and judgment, as they lack clear rational guidelines for using algorithm-based platforms. This is a concern because algorithms offer automated 'sweet' rewards, such as convenience, exacerbating people's reliance on emotional traits that may not be wise. The concern applies particularly to the unwarranted submission of personal data, which often functions as an entry currency for the use of algorithm-based platforms.
Introduction
Algorithms have surged into every aspect of digital consumption; the use of social media, for instance, presupposes exposure to algorithmic informational outputs based on personal data. Scholars rightly followed with unabashed critiques of dominant forms of algorithmic decisions, their bias, and their potentially oppressive effects (Cheney-Lippold, 2017; Crawford & Schultz, 2014; Hoffmann, 2019). Much of this work focused on problems arising at structural and institutional levels, with the threat of unwarranted data surveillance as a dominant theme (Gandy & Nemorin, 2019). Other studies (Ananny & Crawford, 2018; Barocas & Selbst, 2016; Neuman, 2016) expressed concern about the unequal distribution of resources among communities in dealing with the possible harms and exclusions created by algorithms. The current study builds on these observations by exploring individuals' vulnerabilities at the micro level, investigating the emotional traits that derive from exposure to algorithms.
The goal of this study is to build new conceptual understandings of emotion in response to emergent algorithmic environments. To this end, it contributes to the understanding of emotional responses to AI by drawing on a stream of empirical research on emotions (Assunção et al., 2022; Burton et al., 2020; Li & Zhang, 2024; Tiedens & Linton, 2001; Zhao et al., 2022) and on critical algorithm and data studies (Crawford & Schultz, 2014; Gandy & Nemorin, 2019). Each of these literatures addresses a respective set of processes that explain the great variation in human susceptibility to automated environments where algorithm-based decisions are made from personal data (Park, 2021; Park & Jones-Jang, 2023). In particular, this study is guided by the notion of emotional appraisal (Marcus et al., 2000; Tiedens & Linton, 2001), which informs us of the potential ways in which affective evaluations, in relation to personal data, help people assess the dynamics of algorithm-based digital consumption. In this vein, the insight from critical algorithm studies (Cheney-Lippold, 2017; Gandy & Nemorin, 2019) delineates the institutional conditions that nudge users to rely on algorithms uncritically, and this study remains critical of algorithmized social conditions that exacerbate ordinary users' vulnerability. Our analysis is also aided by behavioral insights on the degree to which individuals are enticed to privilege automatic emotional rewards.
The collective insight is that human decisions in algorithmic environments may be much guided by emotional appraisal, just as in face-to-face interactions (Dwivedi et al., 2021; Marcus et al., 2000). That is to say, being feeble, or emotionally susceptible to cognitive error, may characterize user responses to personal data-driven algorithmic decisions and thus make users vulnerable (Assunção et al., 2022; Gandy & Nemorin, 2019). This study suspects that the way proprietary algorithms nudge data submission from users plays a role in activating certain sets of emotional traits. To this end, the current study adds the insight that people are not optimized to engage systematically with personal data management; instead, they are predisposed to bounded rationality in evaluating the complexities of algorithmically mediated information. Three related studies are conducted to test these insights and examine the dynamics of emotional correlates, as a function of personal data submission, and their subsequent relationships to informational judgment.
Algorithmic Emotion and Data in Automated Environment
The opacity of commercial algorithms, along with their insatiable need for data surveillance, has completely masked the processes that affect people's engagement with algorithmic outputs. Potential algorithm effects on people can thus be better understood by reverse-engineering algorithms and their informational outputs from the standpoint of users whose digital consumption is at the disposal of proprietary platforms (Barocas & Selbst, 2016; Crawford & Schultz, 2014). Little known, however, are the ways in which people come to make decisions regarding algorithms when facing an influx of automated information outputs in split seconds. One of the central propositions of this study is that the opaque nature of algorithms leaves no rational way for individuals to assess situations systematically, opening the door for them to rely on gut feelings or fleeting hunches and to guess vaguely, picking up available emotional cues to navigate algorithm-based environments.
The role of emotion in guiding human decision-making has been well documented, and a vast array of social science studies has investigated the function of emotions across numerous public issues. This study draws particularly on Marcus et al.'s (2000) proposal of emotional appraisal, whose conceptual applicability extends well beyond the narrow set of political issues and policy preferences their work initially set out to tackle. Drawing on neuroscience, Marcus et al. (2000) proposed that emotions occupy the prime position in guiding informational judgment before conscious, effortful decisions come into play. At the heart of their proposition is bounded rationality, the limited utility of human cognitive capacity (Neuman, 2016; Simon, 1967). From this, we can suspect that the cognitive limit, newly exacerbated by algorithms and their opaqueness amid the recurrent exchange of personal data in digital consumption, might be aided by emotional appraisal.
This is akin to looking inside built-in brain pathways hardwired to assess the emotional reward occurring via data submission, with external reward stimuli coming in various forms: free access, content, personal connections, and the convenience of automation. By contrast, the taxing cognitive demand of deliberate data management, for instance refusing to submit data against algorithmic demands from social media, will be overwhelming, if not punitively exhaustive. In other words, any willful effort on the part of an individual might generate negative emotional responses in lieu of the 'sweet' rewards bestowed upon the person, as people likely forgo the cognitive burden associated with data management (Assunção et al., 2022; Gandy & Nemorin, 2019). The fundamental idea holds that the human instinct for emotional appraisal, deeply wired in animal survival, helps navigate complex algorithm-based environments just as emotion aids human decisions in physical worlds, where it is used to surveil threats and danger as well as comfort and safety (Marcus et al., 2000). One might correctly call this idea an affective version of bounded rationality, which posits that the limits of human decisions arise not only from environmental constraints but also from the inherent nature of humans as information processors (Schwarz, 2012; Tiedens & Linton, 2001).
The current study attempts to elaborate critically on the empirical explanation of how emotion, in its distinct dynamics, instinctively supplements cognitive limits in response to personal data-based algorithmic environments. Puzzling information behaviors have mostly been investigated at the individual level, assuming individuals to be rational decision-makers who could remain aloof from extraneous stimuli (Acquisti et al., 2016, for discussion). The contribution of this study is to propose another, critical way to look at this: the limited capacities of human brains may be aided, or exacerbated, by affective modes of decision. Lacking direct knowledge about how and why particular algorithmic judgments have been made about us, the emotional toolbox of fast and frugal intuitions likely triumphs as the best available cognitive resource for guiding uncertain, unfamiliar, or unknowable future events (Bucher, 2017; Burton et al., 2020; Dwivedi et al., 2021; Lee, 2018; Wang et al., 2024). Accordingly, this study sets out to propose and test the idea of algorithmic emotion. In defining this idea as the set of emotional appraisal processes through which people evaluate, accept, or reject the informational output of algorithms, three qualifications are warranted.
The first qualification concerns the role of emotion. This is not to disregard rationality, cognitive skills, or knowledge, which might guide individuals in processing algorithm-mediated information. Instead, the proposition is that emotional correlates stand out upon exposure to algorithms that privilege automatic, swift judgment. The second qualification concerns the validity of emotional judgment. Emotional appraisal should not be seen as error-free, valid, or superior, although feelings may often turn out to be entirely correct. Instead, this study advances the thesis that emotion may be an intuitive basis for guiding informational judgment, as algorithms exacerbate human susceptibility, possibly leaving people vulnerable to automated decisions and their potential bias. The third qualification concerns the nature of emotional dynamics as a process. In assessing surrounding environments, human brains rapidly use emotion to register positivity and negativity as a way to simplify and narrow the available strategies (Muramatsu & Hanoch, 2005). This study's primary interest is in this process, which is algorithmically stimulated by the need to assess informational uncertainties and, eventually, to bring mental order to them.
Data marketers have long known that computers elicit intimate emotions, as consumers often project relational personalities onto the machines they interact with (Brandtzaeg et al., 2022; Moon, 2000). Behavioral economists call this recursive exchange 'lock-in', a manipulative marketing technique luring consumers back to a service with an enticement (Gandy & Nemorin, 2019), such as Starbucks reward coupons through which a customer is repeatedly reminded of positive gains in exchange for their coffee consumption. Positive gains from algorithms, this study argues, might well be perceived not just as tangible but also as emotional or even relational ones, such that in dealing with personal data demands from algorithms, the rewards of automatic access and use make digital consumption susceptible to 'lock-in'. That is to say, the cognitive burden eases as emotion is encouraged to reign as a prime source of judgment, discouraging users from switching off, rejecting, or critically receiving algorithmically mediated information.
From the standpoint of a user, however, the uncertainties in facing algorithms remain fundamentally similar to the dynamics of human-machine interaction (Jussupow et al., 2020; Nass & Moon, 2000; Reeves & Nass, 1996). First, an algorithm such as those built into social media platforms can be interpreted as a machine to which users delegate decisions with respect to particular functions, namely personalized friend-follower suggestions or content recommendations. Second, personal data in the repeated interactions with a platform or its algorithmic features can be seen as a token of a positive relationship built over time (Glikson & Woolley, 2020; Jafar et al., 2024). In fact, these are hardly distinguishable from human relationship-building: when interpersonal investment (personal data) meets the return of positive, rewarding experiences (automated recommendation), relational equity is achieved, generating affirmative, or even optimistic, emotional projection (Afifi et al., 2016; Park, 2022).
Three Studies in Algorithmic Emotion
Analyses in the three studies are based on a U.S. survey collected by Pew Internet (2018). This publicly available dataset consists of several subsamples, each containing subsets of questions related to (1) social media algorithms, (2) AI-automated programs, and (3) specific social networking services, such as Facebook. For this study, these questions were reorganized into three sets of items investigating the positive and negative emotional correlates associated with people's exposure to algorithms (Study 1 and Study 2) and its consequences (Study 3). Given this purpose, hierarchical regressions were run, because they allow the posited relationships to be inspected separately while controlling for socio-economic covariates; a sketch of this block-wise strategy follows below. This analytical decision to opt for simpler analyses, for instance in lieu of a combined model, was also conceptually valid in that this study does not propose a unified model (Tomarken et al., 2005).
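For concreteness, the block-wise logic can be illustrated as follows. This is a minimal sketch in Python with statsmodels, not the study's actual analysis script; the DataFrame df and all column names (e.g., n_platforms, data_endorsement, positive_emotion) are hypothetical stand-ins for the measures described under each study.

```python
# A minimal sketch of the hierarchical (block-wise) regression strategy,
# assuming a pandas DataFrame `df` with hypothetical column names.
import statsmodels.api as sm

demographics = ["age", "female", "nonwhite", "education", "income"]
exposure = ["n_platforms", "data_endorsement"]  # algorithm-exposure block

def fit_block(outcome, predictors):
    """OLS for one block of predictors, dropping missing cases."""
    X = sm.add_constant(df[predictors])
    return sm.OLS(df[outcome], X, missing="drop").fit()

m1 = fit_block("positive_emotion", demographics)             # block 1
m2 = fit_block("positive_emotion", demographics + exposure)  # block 2

# Incremental variance explained when the exposure block is added.
print(f"R2 block 1 = {m1.rsquared:.3f}, block 2 = {m2.rsquared:.3f}")
print(f"delta R2   = {m2.rsquared - m1.rsquared:.3f}")
```

Comparing the accumulated R² across blocks, as the tables below do, shows how much variance the exposure measures explain over and above the socio-demographic covariates.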
Study 1
Study 1 tested the formation of the emotional traits of positivity and negativity upon exposure to algorithms, which also likely relates to the variation in algorithmic reception, that is, how one receives algorithms and their informational legitimacy (H1). As noted above, dual modes of positive and negative emotion are hypothesized, with the personal data exchange between users and algorithms registering the distinctive emotional traits.
Method and Measures
Study 1 analyzed the sample of U.S. social media users, those who used Facebook, Twitter, Snapchat, Instagram, or YouTube, all services based on proprietary algorithms (n = 4,316). Demographic characteristics were not distant from general population profiles reported by the American Community Survey (ACS). The median age, measured on a 4-point scale, was between 50 and 64 (median = 3, SD = 0.96); 50.6% were female; 25.5% were nonwhite; the average education level was an associate's degree (M = 4.25, SD = 1.50, on a 6-point scale); and the median household income was between $60,000 and $75,000 (M = 5.84, SD = 2.37, on a 9-point scale). These demographics were included as covariates in the analyses.
Algorithmic Exposure
In observing the variation in a user's exposure to proprietary social media algorithms, two types of measures were used. First, an index of general algorithmic exposure was calculated by counting the number of social media platforms each person used (M = 2.35, SD = 1.25). It is critical to note that users can hardly engage with social media meaningfully without becoming algorithmically involved; this measure is thus called general, as it only observes the extent of one's overall engagement with algorithm-based social media settings. Second, an index of personal data endorsement was constructed by averaging responses to a four-item battery. These questions measured the extent to which one endorses, rated from 1 (not acceptable at all) to 4 (very acceptable), the use of personal data for personalized recommendation in each of (1) advertising, (2) political campaigns, (3) friend connections, and (4) social events. Conceptually, this captures the willingness to give up data, or to let personal data be exposed to algorithms, in the respective contexts (M = 2.41, SD = 0.74; Cronbach α = .808).
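As an illustration only, this kind of averaged index and its reliability could be computed as follows; df and the item names are hypothetical stand-ins for the four acceptability ratings, and Cronbach's alpha is computed from its standard variance formula.

```python
# A minimal sketch of the four-item endorsement index and its reliability,
# assuming hypothetical column names for the 1-4 acceptability ratings.
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha via the standard item/total variance formula."""
    k = items.shape[1]
    item_var_sum = items.var(axis=0, ddof=1).sum()  # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)       # variance of the sum score
    return (k / (k - 1)) * (1 - item_var_sum / total_var)

endorsement_items = ["ads", "campaigns", "friends", "events"]  # hypothetical
df["data_endorsement"] = df[endorsement_items].mean(axis=1)    # averaged index
print(f"alpha = {cronbach_alpha(df[endorsement_items]):.3f}")  # reported: .808
```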
Emotional Trait
Accounting for emotions, a six-item battery assessed emotional traits. Each item asked respondents to rate, on a scale of 1 (never) to 4 (frequently), how often they experienced a positive or negative emotional state in response to their social media content: feeling angry, inspired, amused, depressed, connected, and lonely. A factor analysis was run to confirm the presence of two distinctive emotional correlates. The positive correlates were inspired, amused, and connected, while the negative correlates were angry, depressed, and lonely, with an un-rotated solution explaining 40.61% of the variance at an eigenvalue of 2.43. For each correlate, an index was created by averaging responses (positive: M = 2.95, SD = 0.64, Cronbach α = .678; negative: M = 2.70, SD = 0.63, Cronbach α = .731). Interestingly, the two emotional correlates were correlated at r = .57, p < .01, indicating that they related to each other yet remained distinct in pattern.
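To make this measurement step concrete, the following rough sketch shows how such a two-factor check and the averaged indices could be reproduced. It is not the study's analysis code: df and the item names are hypothetical, and sklearn's FactorAnalysis (un-rotated by default) stands in for the conventional factor analysis reported above.

```python
# A minimal sketch of checking the two-factor structure of the six emotion
# items and building the averaged indices; column names are hypothetical.
import pandas as pd
from sklearn.decomposition import FactorAnalysis

items = ["angry", "inspired", "amused", "depressed", "connected", "lonely"]
fa = FactorAnalysis(n_components=2).fit(df[items])
loadings = pd.DataFrame(fa.components_.T, index=items,
                        columns=["factor_1", "factor_2"])
print(loadings.round(2))  # positive vs. negative items should separate

# Averaged indices for each correlate, then their correlation.
df["positive"] = df[["inspired", "amused", "connected"]].mean(axis=1)
df["negative"] = df[["angry", "depressed", "lonely"]].mean(axis=1)
print(df[["positive", "negative"]].corr().iloc[0, 1])  # reported: r = .57
```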
Algorithmic Reception
At the stage of perceived reception, three forms of belief about the legitimacy of algorithms were measured: (1) bias (how), (2) manipulation (what), and (3) entities (who). The belief about algorithmic bias was tapped by a binary item asking respondents whether they believe that computer programs can make decisions without human bias or will reflect the bias of the people who designed them. The 'no human bias' answer was coded as 1, so that a more receptive belief receives a higher score (M = 0.38, SD = 0.48). For the belief about algorithmic manipulation, three binary items (Yes/No) asked respondents whether they find it acceptable for social media to (1) change the look and feel of the site for some users but not others, (2) remind some users on election day but not others, and (3) show more happy posts and fewer sad posts for some users but not others; an index was created by averaging the responses (M = 0.15, SD = 0.28; Cronbach α = .658). The belief about algorithmic entities was measured by asking the degree to which each respondent trusted technology companies to do the right thing, rated on a scale of 1 (hardly ever) to 4 (just about always) (M = 2.14, SD = 0.66).
Results
Hierarchical regressions tested the formation of algorithmic emotion (H1), in which algorithmic exposure related to the two distinctive routes of emotion, each with its respective relationship to algorithmic reception. Table 1 displays the hierarchical regression coefficients, which support H1. As predicted, (1) a higher level of algorithmic exposure was related to higher personal data endorsement (the left column), which (2) in turn showed significant relationships to emotional traits (the right column), controlling for a large set of socio-demographics. In explaining the variation in positive emotion, the two algorithm exposure variables accounted for 18% of the variance as a block (R² = .18), compared to 9% (R² = .09) for negativity, and this variance was even larger than that explained by the entire block of socio-demographics (R² = .05).
Table 1. Predictors of Algorithmic Emotional Trait.
Note. R² for each block is accumulated. Hierarchical regression was used with socio-demographics as the first block and algorithm exposure as the second block.
***p < .001.
Interestingly, personal data endorsement was related to both positive and negative emotional traits, as was a higher level of general exposure to algorithms, suggesting that personal data exchange may be significantly associated with emotions that are not mutually exclusive. As shown in the upper rows of Table 2, however, the divergent pattern between positivity and negativity was clear in their respective contributions to algorithmic reception, that is, to how the legitimacy of algorithms is received. Positivity showed consistent, significant, and positive relationships to all three forms of algorithmic reception (bias, manipulation, and entities). A higher level of negativity was related to lower levels of affirmative algorithmic reception for manipulation and entities, respectively (β = −.10, −.16, p < .001), but the coefficient did not reach significance for algorithmic bias.
Table 2. Predictors of Algorithmic Reception.
Note. *p < .05. **p < .01. ***p < .001.
Discussion
The results of Study 1 support the fundamental premise of algorithmic emotion: positive and negative emotional traits are significantly related to algorithmic exposure and serve distinctive functions in people's reception of algorithms. The finding that positive emotion related to all three affirmative beliefs, (1) the algorithmic rightness of commercial entities, (2) algorithmic neutrality as opposed to built-in bias, and (3) the preference for algorithm-manipulated information, raises the concern that automated algorithmic exposure, when emotionally registered as positive, likely guides people's judgment at the whim of controlling algorithms (Assunção et al., 2022; Gandy & Nemorin, 2019; Jussupow et al., 2020).
The finding that personal data were linked to both positivity and negativity defies simple interpretation. Still, the results are consistent with the idea that personal data might be a fundamental part of building emotions related to algorithms. One can reason that data submission is an unavoidable part of algorithm-based consumption, generating a flood of automated information in social media, surely in both positive and negative tints. This was evident in that general algorithm exposure resulted not only in higher personal data endorsement but also in higher levels of both positive and negative emotion.
Study 2
Study 2 was conducted to replicate Study 1's findings in different algorithmic contexts. First, Study 2 examined personal data management in the Facebook-specific platform context, especially in its relationship to emotional correlates (RQ1). The investigation dissects discrete data management activities and detects whether each measure has a function in inducing emotional correlates, as found in Study 1. Second, in specific algorithm contexts, Study 2 set out to test whether and how the affirmative reception of algorithms relates to the two distinct emotional correlates (RQ2). This also replicates Study 1's findings, but in stand-alone algorithm applications such as job candidate hiring, finance, and legal decisions.
Method and Measures
Study 2 used two samples: (1) Facebook users and (2) four sets of survey respondents, each presented with one of four algorithm-related scenario questions about (a) job-related algorithmic decisions of hiring and discrimination (behavioral interview data use, and job resume comparison), (b) consumer financial scoring, and (c) granting a prisoner parole based on a computational program. The socio-demographics of these four subsets (n = 2,138, 2,174, 2,166, 2,146) were close to each other, resembling those reported in Study 1. Demographic characteristics of the Facebook user sample (n = 3,410) also did not differ from the sample used in Study 1. All socio-demographics were used as covariates in the analyses.
FB Data Management and Emotional Traits (RQ1)
Facebook users were asked about their involvement in personal data management activities, each of which was used as a dichotomous variable in the analyses. These four unitary items (Yes coded as 1, No as 0) were asked as part of a battery of questions following the prompt: In the past year, have you done any of the following things? The items were: (1) taken a break from checking Facebook (M = 0.41, SD = 0.49); (2) deleted the Facebook app (M = 0.20, SD = 0.40); (3) adjusted Facebook privacy settings (M = 0.54, SD = 0.49); and (4) downloaded the personal data Facebook has collected about you (M = 0.08, SD = 0.28).
For emotional traits, two questions that respectively asked about the extent of positivity and negativity arising from their personal experiences with Facebook were used. The wording for this measure, on a scale of 1 (never) to 4 (frequently), was: How often does Facebook show you posts that remind you of a happy time in your life (M = 2.97, SD = 0.78) and a sad time in your life (M = 2.20, SD = 0.80).
Emotional Traits and Algorithmic Reception (RQ2)
In each of the four algorithm scenarios, respondents were asked to assess their reception of the informational output. Two measures were used: (1) fairness and (2) effectiveness. The idea is to capture two dimensions of algorithmic reception: normative values (fairness, i.e., what it should be) and utilitarian values (effectiveness, i.e., what it is) of particular algorithm outputs (Hoffmann, 2019). After each scenario described a particular application, respondents were asked: How fair/effective do you think this type of program would be at [insert each task]? Responses were rated on a scale of 1 (not fair/not effective at all) to 4 (very fair/very effective): financial info (fairness M = 1.98, SD = 0.86; effectiveness M = 2.55, SD = 0.87); criminal info (fairness M = 2.36, SD = 0.84; effectiveness M = 2.39, SD = 0.80); job-hiring (a) (fairness M = 2.12, SD = 0.86; effectiveness M = 2.22, SD = 0.82); and job-hiring (b) (fairness M = 2.28, SD = 0.86; effectiveness M = 2.37, SD = 0.82).
In observing emotional traits, the two measures of positive and negative emotion were used. These are the same measures used in Study 1, with the aim of analyzing whether the patterns of emotion observed in social media consumption also extend to one's reception of particular stand-alone algorithm decisions. The four scenarios used are included as Supplemental Material.
Results
Table 3 reports the results regarding RQ1. Several patterns emerge. First, FB use frequency, measured as general exposure to the algorithm on a specific platform, was related to both positivity (FB-happy) and negativity (FB-sad). That is, the more exposure to the FB algorithm via frequent use, the more likely one was to experience both happiness and sadness in FB use, the same pattern found for algorithm exposure in Study 1. Second, and importantly, the discrete data management actions taken by individuals tended to display divergent emotional functions: higher data management for negativity, as opposed to lower data management for positivity. Although one item, adjusting privacy settings, was associated with both emotions (β = .06, .07, p < .001), the contrast between general algorithm exposure and specific data management was apparent, as shown in the first and second blocks of the hierarchical regressions in Table 3. For instance, while simply using FB was related to both emotions, active involvement in data management, such as downloading data and taking a break from FB, was more likely to be linked to negativity, lending support to the idea that individuals' actions taken against algorithmic requests for personal data likely relate to negative emotional correlates.
Table 3. Relationship Between Personal Data Management and Emotional Trait (RQ1).
Note. Coefficients of socio-demographics are not shown. R² for each block is accumulated. Hierarchical regression was used with socio-demographics as the first block, algorithm exposure as the second block, and data management as the third block.
*p < .05. **p < .01. ***p < .001.
Our findings addressing RQ2 are displayed in Table 4. The results are straightforward. In the reception of each of the four algorithm decisions, the function of emotion was sizeable and consistent: positivity bore positive relationships to favorable evaluations of both algorithmic fairness and effectiveness. In contrast, the coefficients in the lower half of Table 4 show that negativity tended to be related to poor evaluations of algorithm decisions. That is, as people felt negative about their general social media experiences, they also tended to disapprove of the algorithmic informational outputs in each of the four applications. The result holds consistently for fairness, albeit with marginal significance in the job-hiring (a) and criminal judgment scenarios. Coefficients for effectiveness, however, did not reach significance, except for the evaluation of a computerized resume comparison (job-hiring (b)), suggesting that people's judgment regarding fairness (normative beliefs) may be tied to emotion far more than judgments of effectiveness (factual beliefs).
Table 4. Relationship Between Emotional Trait and Algorithmic Reception (RQ2).
Note. R² for each block is accumulated, excluding the socio-demographic and algorithm exposure blocks.
^p < .10. **p < .01. ***p < .001.
Discussion
Study 2 replicated the two processes (data to emotion, and emotion to reception) in the notion of algorithmic emotion proposed and tested in Study 1. Study 1's findings were replicated, but Study 2 uncovered them using discrete measures (RQ1) and unmasked patterns that were better detectable in isolation (RQ2). The most salient finding concerning RQ1 was that discrete data management efforts, when actively exercised, were related to FB-sadness but inversely related to FB-happiness. This suggests that willful actions against data submission, such as downloading collected data, run counter to the predetermined courses that are programmed to produce positive, rewarding experiences.
This is an important distinction from Study 1, which found personal data endorsement strongly linked to positivity, whereas the data management uncovered in Study 2 was linked to negativity. That is to say, action against endorsing data use, as suggested in studies of human-computer interaction (Brandtzaeg et al., 2022; Moon, 2000), works against building a sanguine relationship between users and FB, because an integral function of FB's algorithm, as well as one's perceived relational tie to the platform, depends on personal data.
The remarkable consistency of the findings regarding RQ2 supports this reasoning, demonstrating that positive emotional traits might generate pervasive emotional ties to machine-based decisions, as displayed in the favorable reception of fairness in a scenario involving sensitive financial information, such as credit scoring based on one's behavioral data. After all, there seems to be no particular reason to defect from algorithm decisions when experiences are consistently rewarding. The finding that negativity did not play as consistently significant a role in guiding algorithm decisions as positivity implies that the emotional instinct wired into the human brain might be more vulnerable to sweet palatability than to bitterness (Gandy & Nemorin, 2019; Muramatsu & Hanoch, 2005). From this, we suspect that when it comes to emergent algorithm-mediated information, people might be more susceptible to automated 'sweetness' than 'bitterness', leaving them open to potential manipulation and bias.
Study 3
Study 3 was conducted to examine the consequential functions of algorithmic reception, proposing that the positive and negative emotional valences likely related to algorithmic reception play critical roles in cultivating one's acceptance of the realities represented in algorithms (H2). Study 3's focus is thus two-fold: (1) the positive and negative valences potentially related to algorithmic reception, and (2) the function of such reception in cultivating particular viewpoints, particularly when encouraged by positive algorithmic reception.
Method and Measures
Study 3 used the sample of social media users whose particular and combined platform experiences depend much upon proprietary algorithms (n = 4,316); this is the sample used in Study 1. Likewise, the same socio-demographic characteristics were used as covariates, and the analyses controlled for personal data endorsement and general algorithmic exposure as in Study 1. Algorithmic reception, the three forms of algorithm beliefs used as dependent variables in Study 1, served as three independent variables in Study 3. The additional measures in Study 3 are as follows:
Perceived Accuracy of Algorithm Realities
This unitary binary (Yes/No) measure captured the extent to which respondents believed in the accuracy of the algorithmic realities represented in social media. The actual wording was: In general, would you say that the content posted on social media provides an accurate picture of how society as a whole feels about important issues? (M = 0.20, SD = 0.40). Because this outcome is binary, logistic regression was used for its analysis; a minimal sketch follows below.
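As an illustration, a logistic model of this kind could be fit as follows with statsmodels. This is not the study's actual code: the DataFrame df and all column names are hypothetical stand-ins for the covariates and reception measures described in Studies 1 and 3.

```python
# A minimal sketch of the logistic regression for the binary
# perceived-accuracy outcome, with hypothetical column names.
import numpy as np
import statsmodels.api as sm

predictors = ["age", "female", "nonwhite", "education", "income",  # covariates
              "n_platforms", "data_endorsement",                   # exposure
              "bias_belief", "manip_belief", "entity_trust"]       # reception
X = sm.add_constant(df[predictors])
model = sm.Logit(df["perceived_accuracy"], X, missing="drop").fit()

# Odds ratios for interpretation (cf. the 1.47 and 1.46 reported below).
print(np.exp(model.params).round(2))
```

Exponentiating the coefficients yields odds ratios, so a value above 1 indicates that a stronger receptive belief raises the odds of perceiving algorithmic realities as accurate.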
Positive and Negative Emotional Valence
To determine the degree to which algorithms bring positive or negative consequences to individuals, Study 3 used a measure asking respondents to evaluate their algorithmic experiences with automated social media posts or news feeds, on a scale of 1 (never) to 4 (frequently), that is, whether positive or negative reactions were evoked by automated features. The wording was: How often, if ever, do you see the following things on social media? Following this prompt, a battery of three items was used for negative valence (posts that are overly dramatic or exaggerated; posts that appear to be about one thing but turn out to be about something else; people making accusations or starting arguments without waiting until they have all the facts). The responses to these items were averaged to create an index (M = 3.30, SD = 0.69; Cronbach α = .811). A unitary item was used for positive valence (posts that teach you something useful that you hadn't known before; M = 2.91, SD = 0.77).
Results
The hierarchical regression analyses in Study 3 dissect the two relationships posited in H2: (1) the emotional valences of algorithmic reception and (2) the affirmative functions of such reception in cultivating particular viewpoints represented in social media. First, the left two columns in Table 5 show the relationships between the forms of algorithmic reception and emotional valences, lending support to the first part of H2. The relationships were evident for negative valence; for positive valence, only algorithmic entities were found to be significant.
Table 5. Valence Consequences of Algorithmic Reception.
Note. R² for each block is accumulated, excluding the socio-demographic block.
*p < .05. **p < .01. ***p < .001.
Second, two forms of algorithmic reception were also found to be critically related to perceived accuracy (odds ratios = 1.47 and 1.46, p < .001): the higher the receptive belief in algorithmic legitimacy, the more likely one was to perceive algorithmic output as accurate. When it comes to the relationship between emotional valences and the perceived accuracy of algorithmic realities, on the other hand, the contrast between positive and negative valences was unmistakable, as perceived accuracy was predicted positively by positive valence but negatively by negative valence (in the middle column on the right side of Table 5).
Discussion
No causal claim is warranted with cross-sectional data. Still, Study 3 found that various forms of affirmative belief in algorithms were linked to positive and negative valences, as hypothesized in H2. The patterns of association with negative valence indicate that negative experiences may stand apart from people's overall experiences of algorithmic reception once the legitimacy of the algorithm is firmly registered on the part of a user. Negative experiences might be pronounced, standing clearly apart from an algorithmic environment in which exposure to algorithms is constantly inclined to invoke positive 'sweet' rewards such as convenience or free access. Further, the consistent relationships between the perceived legitimacy of algorithmic entities and both (1) positive valence and (2) algorithmic accuracy show how critical it might be to understand algorithmic reception as more relational to its host platform than purely informational, as found in Study 2 (Brandtzaeg et al., 2022; Moon, 2000). Put differently, the overall explanatory power appears far stronger when one trusts the rightness of algorithmic entities (who) than when algorithmic legitimacy is informationally (manipulation, what) or technically (bias, how) accepted.
The finding that positive valence tended to cultivate the perceived accuracy of the algorithmic realities represented in social media is noteworthy. In particular, the clear function of positive valence raises the concern that algorithms might work as 'lock-in' (Afifi et al., 2016; Gandy & Nemorin, 2019), as this pattern of reinforcement might solidify people's reliance on the algorithmic environment. One might also call it 'emotional spillover', with the positive valence consequent on affirmative algorithmic reception spilling over into people's judgment in other areas of concern, such as the positive evaluation of information. After all, it is hardly possible to be nudged out of an algorithmic environment whose core function, together with its terms of data use, produces positive valence that is constantly too sugar-coated to reject.
General Discussion
The present study proposed links between algorithms and emotions. The proposition was that emotions in algorithmic consumption, notably of social media, serve as a guiding principle, as algorithms stimulate or encourage rewarding experiences conducive to positive emotions. Study 1 found that general exposure to algorithms contributes to the affirmative reception of algorithms, indicated by one's beliefs about algorithmic legitimacy in terms of (1) bias, (2) manipulation, and (3) entities. Study 2 replicated these findings in both FB-specific and stand-alone algorithmic contexts of automated job evaluation, legal, and financial decisions. Study 3, in turn, examined the positive and negative valences arising from such affirmative algorithmic reception and found their respective contributions to perceiving the accuracy of information represented in algorithm-based social media.
Fundamentally, this study documented critical evidence on vulnerability in the transition to algorithm-based ecosystems, which overwhelm users with automatic feeds of constant reward and immediate convenience in exchange for personal data (Ananny & Crawford, 2018; Gandy, 2010; Gandy & Nemorin, 2019; Park et al., 2022). The notion of algorithmic emotion proposed here speaks to the insight that emotional appraisal might be algorithmically exacerbated in favor of instant benefits, as people are encouraged to forgo systematic effort. This suggests that users can be susceptible beyond their individual-level decisions, socio-culturally and economically; for instance, users might incline toward informational outputs of positive enticement even when the decision is economically harmful or socially biased. Importantly, the fact that proprietary algorithms remain enclosed 'black boxes' leaves emotional gut feelings as the sensible means to which people resort in evaluating informational outputs. This implies that users frequently exposed to algorithms will remain susceptible to potential manipulation, opening further dangers that built-in algorithmic bias might seep readily through affirmative and trusting mindsets (Cheney-Lippold, 2017; Crawford & Schultz, 2014; Noble, 2018).
As discovered in Study 1, however, there seems to be no practical way to be immune from this susceptibility, given the role of personal data submission, a type of relational token in algorithm-based consumption without which no one can meaningfully use social media functions. Here it is critical to highlight one of Study 2's findings: people's actions to manage personal data tended to be related to negative emotions. While the causal direction is far from clear, individuals' willful attempts to regulate the extent of personal data submission will likely fail so long as they are tied to negative emotional correlates. Adding to this, once social media use begins, one has already become a data subject of algorithmic decisions that are, in turn, subject to constant positive emotional correlates. In this sense, the algorithmic emotion that this study documented appears to have two lock-in mechanisms: (1) the reward of positivity and (2) the punishment of negativity, both of which turned out to be associated with the extent of personal data submission (Assunção et al., 2022; Gandy & Nemorin, 2019; Li & Zhang, 2024).
While one should caution against overgeneralization, it is possible that algorithms promote a Darwinian sense of decision-making, with emotional instinct serving as a primary mode (Assunção et al., 2022; Li & Zhang, 2024; Zhao et al., 2022). Surely, this study is limited in that it lacked a direct model comparison testing how a person's rationality spars with their emotional correlates, but the evidence is consistent across the three studies. Study 3's finding in particular points to the insight that emotions might carry over to cultivate people's evaluation of the accuracy of social media information, attesting to the prevalence of instinctive judgment in algorithm-based environments, which are now unavoidable routes for navigating uncertainty. The SCOT (social construction of technology) perspective is relevant here, as this study's findings provide hints for future algorithmic design, that is, how algorithms must be re-constructed to deemphasize emotional reward for individual users. In this vein, SCOT scholars are best positioned to pose an even bigger structural question regarding how to structure the algorithm as a social construct, namely, by understanding how algorithms as social structures reinforce their power (Barocas & Selbst, 2016; Cheney-Lippold, 2017; Neuman, 2016).
On a broader level, this study's conceptual refinement can be taken as a call for human-centric, as opposed to technology/media-centric, research. The idea is that people have evolutionarily old brains wired with biological instincts regarding environmental stimuli; as in the case of this study, the human brain might not yet have fully adapted to such fleeting algorithmic decisions. Thus, for the purpose of this study, understanding the limits of individual users as rational information processors (Bucher, 2017; Dwivedi et al., 2021) is a prerequisite for understanding the posited (either harmful or beneficial) functions of algorithmic technologies and their features.
In fact, scholars (Eslami et al., 2019; Neuman, 2016; Park, 2022; Park et al., 2022) have urged future studies to take up the task of 'decoding' consumption processes from a socio-psychological standpoint, instead of presupposing that the power of new technologies will be imprinted directly on human minds. Taking this point seriously, however, this study did not tackle the issue directly. Its modest proposal is that there is conceptual space for dissecting algorithmic consumption from the standpoint of emotions; by gaining insight into the affective aspect of bounded rationality, we can more fully understand how deep the structural problems of algorithms and their bias, observed in institutional behaviors (Barocas & Selbst, 2016; Crawford & Schultz, 2014; Hoffmann, 2019), can become as they trickle down to produce potentially sinister effects in actual consumption.
How algorithm exposure is operationalized invites criticism of the current study. It is fair to question whether a person's exposure to algorithms can be fully captured through cross-sectional surveys, especially via (1) the number of social media platforms actively used and (2) personal data endorsement. With no established measure of algorithm exposure to draw on, however, this study addresses the concern in the following ways. First, it is acknowledged that virtually no social media consumption is possible without exposure to algorithms. As for personal data, social media and their algorithms would not function meaningfully without personal data input endorsed by individual users. This is like driving an automobile: although an engine, hidden under the hood, is not equivalent to the car, driving presupposes the use of the engine, which in the case of social media can be imagined as the algorithm. Yet just as driving a car involves more than the function of its engine, there remains a need to develop more precise measures enabling the systematic observation of automated features, such as the automated news-post feeds in Study 3. The extant literature is ambiguous on this point, particularly regarding separating algorithmic exposure from social media use.
Future work will need to address this limit by developing new items, as this study is hamstrung by the use of secondary data. The limits of field surveys must also be complemented by lab experiments, which make stronger causal cases because emotional traits can be better isolated, for instance by placing people in emotionally arousing algorithmic situations. Another possibility is that including moderating variables such as digital literacy or AI awareness would add insight into how emotion plays out when users possess certain understandings (see Gruber et al., 2021; Wang et al., 2024). As this study set out to test the antecedents and consequences of emotion in mundane contexts of algorithmic exposure, however, we are encouraged by the prospect of experimental studies following up to replicate the roles of emotion in algorithmic consumption.
This study concludes by cautiously forecasting that individual control in guarding against algorithms will be difficult to sustain. This is likely so because limited cognitive capacities, facing an influx of automated decisions, will cater to reliance on algorithms that constantly nudge positive emotional appraisal. Deliberate, considered responses in algorithm-based environments seem to remain a rare exception, not the norm, and this study's findings invite a rethinking of how decisions are being shaped and influenced by emergent algorithm-based forms of institutional conditions.
Supplemental Material
Supplemental material for this article (sj-docx-1-sgo-10.1177_21582440261423010, for Algorithmic Emotion, Personal Data, and Informational Judgment by Yong Jin Park in SAGE Open) is available online.
Acknowledgements
The author expresses his gratitude to RSM and the Harvard Berkman Klein Center, and remains very thankful for vibrant discussions at BKC, Harvard Law.
Funding
The author disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: the kind support of the Mozilla Foundation (Responsible Computing) is acknowledged.
Declaration of Conflicting Interests
The author declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Data Availability Statement
All data used in our analyses are publicly available and verifiable (Pew Internet, 2018).
References
