Abstract
The “automaticity dominance” perspective on cognition and behavior holds that automatic processes guide most behavior because deliberate processing is slow, inefficient, and therefore rare, typically restricted to “problematic” situations. Other scholars argue on both theoretical and empirical grounds that deliberate processing is more common. In this study, the authors test automaticity dominance by using multinomial processing tree models to examine donation decisions in an online sample of 1,027 respondents. Using a mixture of preregistered and exploratory analyses on both experimental and observational data, the authors find that (1) the processes underlying donation behavior execute efficiently and rapidly, but key processes are also controllable; (2) deliberate cognition increases in problematic situations but also operates when levels of problematicity are low; and (3) respondents deliberately control (at a minimum) a substantial minority of their decisions. These results indicate that deliberate cognition might not be as rare as an automaticity dominance perspective suggests.
The idea that human behavior is guided by internal forces that are neither entirely accessible nor subject to our rational thought processes has shaped sociological scholarship ranging from foundational theories (Durkheim, Cosman, and Cladis 1995; Weber 1981) to modern treatments of social reproduction, education, culture, gender, and race (Bourdieu 1984, 1990; Dumais 2002; Hall 1993). This work has been bolstered by findings from the cognitive sciences that suggest that thought processes that are fast, efficient, automatic, and/or unconscious play a major role in shaping how we interact with the world around us (Cerulo 2010; Solso, MacLin, and MacLin 2005).
Scholars agree that much of the cognitive work humans perform occurs automatically: we perceive and classify spontaneously, and sometimes respond to stimuli quickly, easily, and unconsciously. However, there is considerably less agreement about the degree to which automatic cognition guides behavior relative to deliberate, controlled cognitive processes. One major approach emphasizes cognitive processes that are rapid, automatic, and/or unconscious. Deliberate control of behavior is assumed to be slow, effortful, and conscious, and consequently is much rarer, often restricted to challenging situations (Bourdieu 1990; Joas 1996; Lizardo 2021; Miles 2015; Vaisey 2009; cf. Cerulo, Leschziner, and Shepherd 2021; Miles 2019). Other sociologists have criticized this perspective. Arguments vary, but all suggest that deliberate processing is both more common and more influential on behavior than an “automaticity dominance” approach implies (Elder-Vass 2007; Hitlin and Johnson 2015; Leschziner and Brett 2019; Mische 2014; Vila-Henninger 2015).
If these critiques are correct, then the many theories and applied practices that take strong automatic influence as a premise might need to be revised. This would be crucial given that work on automatic processing has garnered significant attention from leaders in education, government, health care, and business. The idea that automatic processes significantly influence our behavior has shaped how organizations train leaders, promote innovation, create policies and guidelines, and combat equity-related problems like underrepresentation of marginalized groups (e.g., AAMC 2023; Department of National Defence 2021; Dobbin and Kalev 2018; North n.d.; Onken, Chang, and Kanwal 2021; Verghis 2016). Organizations ranging from government agencies to multinational corporations invest substantial time and money in initiatives built on the premise that automatic processes are a powerful determinant of behavior (e.g., implicit bias training; Kirkland and Bohnet 2017; StrategyR 2021). Thus, the implications of understanding how cognition affects behavior reach far beyond the confines of academia.
In this article we test the relative merits of these “automaticity dominance” and “deliberation friendly” perspectives by assessing the claim that the influence of deliberate cognition on behavior is rare. We begin by reviewing competing claims about deliberate processing and highlight two assumptions in accounts that describe a restricted role for deliberate control of behavior—that deliberate processing is slow and inefficient, and that it is largely restricted to problematic situations. We assess the validity of these assumptions using data on donation decisions from a large, online sample, and build on these results to estimate the prevalence of deliberate processing among our respondents. We conclude with a discussion of what our results mean for sociological work on cognition and action.
Automaticity Dominance and Its Critics
The idea that much of our behavior is guided by cognitive processes that are fast, effortless, and/or unconscious has a long history in sociology, featuring in the writings of foundational theorists and later being popularized through the work of scholars such as Giddens and Bourdieu (Bourdieu 1990; Dewey 1922; Giddens 1984; Weber 1920). More recent work has reinforced the notion of automaticity dominance among sociologists by grounding it in modern cognitive science (Cerulo et al. 2021; Lizardo 2004; Lizardo and Strand 2010; Vaisey 2009). Belief in the power of automatic processes has no doubt also been bolstered by the rising prominence of work on priming and implicit cognition in both academic and public circles (e.g., Quillian 2008). Currently, the idea that automatic cognition is a powerful force guiding behavior appears in many of sociology’s substantive subfields including gender, race, education, crime, and culture (Beckman et al. 2018; Cerulo et al. 2021; Friedmann and Efrat-Treister 2023; Gaddis 2013; Melamed et al. 2019; Quillian 2008; Ridgeway and Kricheli-Katz 2013; Rivers, Gibbs, and Paternoster 2017; Warde 2014).
Space limitations make it impossible to review the many accounts of automatic influence that have been advanced. Instead, we describe Vaisey’s (subsequently named) sociological dual-process model as a paradigmatic case (Vaisey 2009). The sociological dual-process model is a useful starting point both because it makes a clear case for automaticity dominance, and because it is widely referenced, both by those who favor automaticity dominance and those who critique it (Cerulo et al. 2021; Leschziner and Brett 2019).
Vaisey (2009) argued that “actors are driven primarily by deeply internalized schematic processes” but are capable of “deliberation and justification . . . when required by the demands of social interaction” (p. 1687). This is what cognitive scientists refer to as a default-interventionist model. Learned mental structures—schemas—allow people to automatically respond to situations with a feeling of attraction or repulsion that is generally sufficient to guide behavior (the default), but deliberate processes can intervene to control action when automatic processes fail to produce an adequate response. Borrowing Haidt’s (2006) well-known metaphor of a rider on an elephant, Vaisey argued that deliberate cognition is like a rider that is generally “not in charge” and “no match for the elephant [automatic cognition] in a direct struggle” (p. 1683). Consequently, action is driven “primarily” by automatic processes, while deliberate cognition is generally restricted to offering post hoc justifications for actions undertaken by the elephant.
As articulated by Vaisey, the heart of an automaticity dominance perspective is the idea that automatic processes are the predominant influence on action, whereas deliberate processes only rarely exert control. This claim typically rests on a few assumptions. The first is that automatic cognition has characteristics that give it an advantage over deliberate processes in influencing behavior. Deliberate cognition is often described as being “more cognitively expensive” or “high effort” compared with automatic cognition, which in contrast is described using terms such as “rapid”, “easy”, and “low effort” (Evans 2008:257; Haidt 2001:820; cf. Lizardo et al. 2016). If, as commonly assumed, people are cognitive misers who prefer to expend as little cognitive energy as possible when accomplishing tasks, then it follows that they will avoid deliberately controlling their behavior whenever possible (Shaw 2021; Toplak, West, and Stanovich 2014). Furthermore, automatic cognition executes more quickly than deliberate cognition, and is often assumed to operate unconsciously (e.g., Lizardo et al. 2016; Moore 2017; Shaw 2021; Vaisey 2009). Both characteristics should provide automatic processes with an advantage in guiding behavior. In short, automaticity is thought to dominate behavior because it is faster, easier, and less noticeable than deliberate processing.
A second assumption is about when deliberate processing intervenes in the behavioral process. Most scholars, including critics of automaticity dominance, agree that deliberation is more likely if individuals are motivated to control their behavior, or whenever automatic responses prove inadequate to the demands of a situation (Fazio and Olson 2014; Leschziner and Brett 2019; Vaisey 2009). Authors have described such situations using terms such as “complex” (Vila-Henninger 2021b), “problematic” (Gross 2009), “difficult” (Vila-Henninger 2015), “unsettled” (Lizardo and Strand 2010; Swidler 1986), or, less succinctly, as having “unstable (or non-existent) socio-cultural cognitive scaffolding” (Lizardo and Strand 2010). However, what constitutes a “problem” is often not specified (but see Lizardo and Strand 2010), nor is it clear how challenging a situation must become before it requires deliberate intervention. The tacit message among proponents of the automaticity dominance perspective is that automatic processes are generally adequate, particularly as individuals commonly operate in stable, predictable social contexts (cf. Lizardo and Strand 2010). People resort to deliberate processing only on the rare occasions when the demands of the situation leave them no viable alternative.
Critics have questioned whether automatic cognition is as influential as the automaticity dominance perspective suggests. 1 Although few would deny that automatic processing provides essential supports for action (e.g., via perceptual and interpretive processes), or that some behaviors can be executed without conscious intent, these scholars take issue with the idea that automatic cognition guides action as frequently and/or powerfully as an automaticity dominance perspective suggests. One common form of critique is to empirically demonstrate instances where people use deliberate cognition. Researchers have identified deliberate processing in a variety of activities ranging from cooking to moral decision making (Leschziner and Green 2013; Vila-Henninger 2021b). Notably, this includes cases where individuals voluntarily engage deliberate processes to achieve their goals despite being in stable (i.e., putatively nonproblematic) contexts (Leschziner and Green 2013). This work also explores how automatic and deliberate processes interact in behavior and decision making. Rather than relying on one or the other type of cognition, individuals often seem to move back and forth between automatically generated (sometimes embodied) feelings and thoughts and deliberate thought as they make sense of phenomena and determine how to behave (Cerulo 2018; Elder-Vass 2007; Hitlin and Johnson 2015; Leschziner 2019; Moore 2017).
Building on this empirical evidence, some critics have proposed replacing a default-interventionist approach with cognitive models that place greater emphasis on the interaction between deliberate and automatic processes, such as the iterative reprocessing or tripartite models (Cerulo 2018; Leschziner and Brett 2019). Reviewing the details of these models is beyond the scope of this article, but in broad strokes they suggest that deliberate processing might be common because it is required anytime a person must integrate multiple inputs from the environment, which might include one or more intuitions, memories, feelings, ambiguous stimuli, and goals (Leschziner and Brett 2019; Stanovich, West, and Toplak 2014; Vila-Henninger 2015, 2021b). How this integration occurs is not fully clear, but some work suggests that it might involve iterating between different types of cognitive processes (Cerulo 2018; Cunningham et al. 2007). Of course, frequent iteration and synthesis would only be practically useful if it could be managed quickly and easily enough to allow individuals to efficiently navigate the behavioral demands of everyday life. Thus, the alternative models offered by critics imply that at least some forms of deliberate processing—although still slower and less efficient than automatic processes—are nonetheless fast and efficient enough to be used on a regular basis.
Scholars have also challenged automaticity dominance by examining more closely the relationship between deliberate cognition and problematic situations. The primary finding here is that deliberation does not seem to be restricted to highly problematic contexts, but routinely operates even when situations are not particularly novel, ambiguous, or otherwise challenging (Leschziner and Brett 2019; Leschziner and Green 2013; Vila-Henninger 2015, 2021b). This might be because problematic situations are not the only way to activate deliberate processes: people could choose to think or act in a deliberate fashion on the basis of internal motives or goals, for instance. Additionally, even when challenges are present the level of problematicity needed to evoke deliberate processing might be low, as Luft (2020) surmised. This could lead to frequent deliberation, particularly if even seemingly stable situations are rife with ambiguities, violated expectations, and interactional negotiations (Brett 2022; Winchester and Green 2019). These arguments suggest that one avenue toward resolving the question of how frequently deliberate cognition is used is to determine how often individuals experience situations as challenging or otherwise problematic: frequent problems would imply frequent deliberation. Relatedly, we can examine the level of problematicity: must situations contain highly disruptive elements, or can deliberation be triggered by weakly problematic situations that only contain low levels of novelty, ambiguity, and other challenges?
Identifying Automatic and Deliberate Cognition
Unfortunately, resolving disagreement about the prevalence of deliberate influence is complicated by the fact that deliberate and automatic cognition are not as simple to distinguish as sociologists originally thought (Cerulo et al. 2021:70; Melnikoff and Bargh 2018). Published work commonly describes automatic cognition as unconscious, automatically primed, rapid, low effort, high capacity, associative in nature, and so on, and deliberate cognition as conscious, controlled, slow, high effort, low capacity, and rule based (Evans 2008; Lizardo et al. 2016; Moore 2017; Shaw 2021; Vaisey 2009; Vila-Henninger 2021a). Under such a model, it is easy to identify a process by determining whether one of the relevant characteristics is present. Thus, Moore (2017) identified automatic and deliberate cognition using fast versus slow response times, while Miles (2015) isolated automatic cognition by placing respondents under cognitive load to restrict their ability to engage in high-effort thinking.
However, this “list of features” approach is challenging to sustain when multiple characteristics are tested simultaneously. In such cases, characteristics do not always co-occur as predicted. For instance, one can find processes that seem to be unintentional but inefficient or controllable, and other processes that are uncontrollable but intentional (Gawronski, Sherman, and Trope 2014; Melnikoff and Bargh 2018). The fact that the “canonical” features of automatic and deliberate cognition do not reliably co-occur means that we cannot identify automatic or deliberate cognition simply by determining if one of its assumed characteristics is present.
We suggest that a tractable way forward is to distinguish between “defining features” and “common correlates” of automatic and deliberate cognition (Evans and Stanovich 2013; Pennycook 2018; Stanovich and Toplak 2012), an approach that has been gaining traction among sociologists studying cognition (Boutyline and Soter 2021; Brett 2022; Cerulo et al. 2021; Ignatow 2021; Leschziner 2019; Miles 2019; Miles, Charron-Chénier, and Schleifer 2019). In this view, automatic cognition is characterized by its autonomy: its mandatory, uncontrollable execution when relevant cues are encountered. 2 Automatic cognition is often rapid, low effort, and so on, but these are not its defining features; they are common correlates. Deliberate cognition, on the other hand, is characterized by “cognitive decoupling”: the ability to distinguish between the real and the hypothetical. Cognitive decoupling undergirds humans’ ability to evaluate competing inputs, imagine possible outcomes, and decide between alternative courses of action. This requires working memory resources, and so can be slow, cognitively demanding, or exhibit other features associated with deliberate processing, but need not do so in all cases.
Underlying both automaticity and cognitive decoupling is the issue of control. This is immediately apparent in the case of automaticity, which is defined as uncontrollable activation of a process. However, cognitive decoupling also implies control because engaging in hypothetical thinking about behavior requires holding automatically activated impulses in check while a person decides what to do. This process might be as simple as approving an automatically activated inclination, but can also involve searching for and organizing relevant information, synthesizing inputs from both internal and external sources, considering possibilities, and so on (Evans 2017; Stanovich and Toplak 2012; Vila-Henninger 2015). At that point, the decision must be implemented, which again implies intentional control, this time of the behavior itself. Given this, it is no surprise that some authors refer to deliberate processing as “controlled” cognition (Olson et al. 2022; Payne 2001; Schneider and Chein 2003). Control, then, is an important key to determining whether a process is automatic or deliberate. 3
Several points should be made. First, automatic activation is a narrower definition of automatic cognition than is typically used by sociologists, who often also emphasize the role of automatic processing in embodied practices and skills (e.g., Ignatow 2009; Lizardo and Strand 2010; Wacquant 2004). We do not see this as a major flaw, as the defining features approach can be easily expanded to include skills by redefining the core feature of automatic cognition as autonomy writ large. This would still include uncontrollable activation but also add autonomous execution; that is, the ability to execute a process without exerting deliberate control (cf. Miles 2019). Second, neither automatic activation of a process nor automatic execution necessarily means that an entire process is uncontrollable. Once activated, some processes quickly complete and leave situationally relevant representations available for further processing (Evans 2017), whereas others can be interrupted, overridden, and/or put under deliberate control. Practically, this means that observing intentional control of behavior indicates the presence of deliberate processing but does not imply an absence of automatic processing. 4 Finally, we acknowledge that the defining features approach, like the idea of dual processes itself, represents a simplification of the cognitive landscape (Evans 2017; Vaisey and Frye 2019). The cognitive processes that are commonly grouped under the headings of “automatic” and “deliberate” (or their synonyms) differ in many ways, and the presence or absence of intentional control is just one way that these processes can be distinguished. The utility of this division lies in the fact that it captures an element common to most descriptions of automatic and deliberate processing, and so is useful whenever our question involves contrasting these two categories, as in the present case. But other questions might be better served by categorizing cognitive processes on the basis of distinctions other than control, such as speed, consciousness, or resource dependence.
Testing Deliberate Cognition
The preceding discussion illustrates that the claim that deliberate control of behavior is rare depends on a series of earlier assumptions about how and when deliberate and automatic cognition operate. In short, intentional control of behavior is uncommon because deliberate processing is slow and inefficient, and so is generally employed only in problematic situations that cannot be addressed through automatic thinking and action routines. We therefore test the claim that deliberate influence is rare first by evaluating the accuracy of these assumptions. We then leverage the results of these tests to provide an (admittedly rough) estimate of the frequency of deliberate processing among our respondents. Stated differently, we aim to answer the following questions, with questions 1 and 2 laying the groundwork for answering question 3:
1. Are deliberate (controlled) processes slow and inefficient?
2. How does deliberate (controlled) processing relate to problematic situations?
3. How often do people intentionally control their behavior?
We address these questions using data on charitable donations. Charitable donations are useful to study because they are both common and socially consequential, with donations providing billions of dollars annually to support causes ranging from education to international aid (Charities Aid Foundation 2019; Gallup 2020; National Philanthropic Trust 2022). Existing work on charitable donations also hints that donation behavior might be shaped by automatic and deliberate processes, making donations a good case for studying their relative prevalence (Costello and Malkoc 2022; Gneezy, Keenan, and Gneezy 2014; Kessler, Kivimaki, and Niederle 2017; Small and Loewenstein 2003; Smith, Faro, and Burson 2013; Unger, Papastamatelou, and Arpagaus 2022).
Methods
Isolating deliberate and automatic cognition is often accomplished using experimental manipulations (e.g., Miles et al. 2019), yet this complicates efforts to determine how cognitive processes shape behavior under normal conditions. Because our primary aim is to determine the prevalence of deliberate processing in everyday behavior, we center our research on a task that allowed respondents to make donation decisions in a relatively natural way. Our research strategy involves first modeling the processes that contribute to donation behavior, and then determining which of these processes plausibly operates via deliberate cognition. This information makes it possible to determine whether deliberate processes are also slow and inefficient, to examine how deliberation relates to problematic situations, and ultimately to examine how frequently deliberate cognition shapes donation behavior. In this section we describe our data, explain the donation task, and outline our model of the processes leading to donation behavior. Details on tests designed to answer the three research questions are given in the relevant sections below.
Data come from a sample of 1,652 U.S. residents drawn from Connect, an online research platform that has been found to produce higher quality data than comparable providers (Douglas, Ewell, and Brauer 2023). 5 All data collection procedures were preregistered on the Open Science Framework and are available at https://osf.io/dcqm2/. Respondents self-selected into the sample after reading a short description of the study. The study was expected to last 10 minutes, and respondents were paid $1.75. In addition, respondents were given the opportunity to earn additional money either for themselves or for a charity during a donation task, which we describe below. After preregistered data exclusions, a total of 1,027 respondents remained. 6 Of these, 415 were randomly assigned to experimental conditions (described under “Stage 1: Testing Control, Speed, and Efficiency”), and the remaining 612 made up our primary analysis sample, which we refer to as our “main” sample in what follows.
Donation Task
The core of the study was a donation task. At the beginning of the task, respondents learned that they would make decisions about whether to give money to a charity or keep money for themselves. In each task trial, respondents were shown the logo of a charity and a small amount of money that they could choose to either donate to the charity or keep (see Figure 1). Decisions were entered via keystrokes, and the task immediately proceeded to the next trial after a response was entered. Charities alternated between four organizations that pretesting suggested elicited similar rates of donation and that respondents viewed equally favorably (St. Jude’s Children’s Hospital, Feeding America, Doctors without Borders, and Habitat for Humanity). 7

Figure 1. A sample trial from the donation task.
The amount of money at stake, which we refer to as the baseline amount, was $1, $2, or $3. Respondents were told that if they chose to donate the baseline amount, the charity would receive the amount listed in a box labeled “If you donate, charity gets,” which could be the same as, more than, or less than the baseline amount. They were informed that this “mimics typical donation situations where overhead costs reduce the actual amount of money that goes to a cause, or, alternately, situations in which charities are able to use ‘matching’ donations to increase your impact.” These modifications to the baseline amount fell into one of five categories: large increases (charity gets 80 percent to 100 percent more than the baseline amount), small increases (charity gets 10 percent to 20 percent more), no change (charity gets the same amount), small reductions (charity gets 9 percent to 17 percent less than the baseline amount), and large reductions (charity gets 44 percent to 50 percent less). Note the reduction amounts are the reciprocals of the increase amounts: for example, increasing the baseline amount by 100 percent is accomplished by multiplying the amount by 2, while the equivalent reduction is given by multiplying the baseline amount by the reciprocal, which is ½ = 0.50, or a 50 percent reduction. The variation in baseline amounts and modifications was designed to create variation in donation rates across conditions (cf. Portillo and Stinn 2018). In total, each respondent completed 120 trials. Of these, the 96 trials that involved modifications were retained for analyses.
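The reciprocal relationship between increases and reductions can be verified with a few lines of arithmetic. The helper below is an illustrative sketch, not part of the study materials; the function name is ours.

```python
# Illustrative sketch (not the authors' code): how the modification
# categories described above relate the baseline amount to the amount
# the charity receives if the respondent donates.

def charity_amount(baseline, multiplier):
    """Dollar amount the charity receives if the respondent donates."""
    return round(baseline * multiplier, 2)

# A 100 percent increase multiplies the baseline by 2; the equivalent
# reduction multiplies by the reciprocal, 1/2 = 0.50 (a 50 percent cut).
assert charity_amount(2.00, 2.0) == 4.00       # large increase: +100%
assert charity_amount(2.00, 1 / 2.0) == 1.00   # reciprocal: -50%

# Likewise, an 80 percent increase (x1.8) pairs with its reciprocal,
# 1/1.8, i.e., roughly a 44 percent reduction -- matching the reported
# 44-50 percent range for large reductions.
assert charity_amount(3.00, 1.8) == 5.40
assert round(1 - 1 / 1.8, 2) == 0.44
```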
Respondents were informed that after the task one trial would be randomly selected for payment. If they chose to keep the money during that trial the appropriate amount would be paid to them as a bonus in addition to the amount promised for completing the study. If they chose to donate the money during that trial the relevant amount would be donated to charity by the researchers on their behalf. Although findings are not always consistent, previous research suggests that respondents generally behave the same way when a single decision is selected for payment compared with when all decisions are paid (Bardsley et al. 2010; Charness, Gneezy, and Halladay 2016). Single-trial payment also helps prevent respondents from changing their behavior in response to previous patterns of earnings or donations (i.e., wealth or portfolio effects).
Modeling Strategy
Data from the donation task were analyzed using a multinomial processing tree (MPT) model (Calanchini et al. 2018). MPT models allow researchers to specify how the processes theorized to underlie a behavior relate to one another in a tree-like structure of dependence. These relationships are expressed as a series of equations that can be solved to produce estimates of the probability of relying on each process. Miles et al. (2023) and others have demonstrated how MPT models can be used to determine how much influence different cognitive processes exert on behavior (Calanchini et al. 2018; Payne and Bishara 2009). Thus, MPT models are an ideal method for testing the cognitive underpinnings of donation behavior.
Our MPT model is shown in Figure 2. This model is the product of a preregistered process of model selection and simplification, which for brevity we detail in Appendix B.

Figure 2. Donation multinomial processing tree model (small modifications).
The model in Figure 2 has three types of parameters: S, BA, and E. S is designed to capture engagement with the task, which we refer to as being “sensitive” to the study design. Respondents who are sensitive to the study design follow the S path and decide what to do on the basis of features of the task, that is, on the basis of either the baseline amount or the modification. The parameter BA represents the probability that they decide whether to give or keep based solely on the baseline amount. Alternatively, with probability 1 − BA they ignore the baseline amount and respond on the basis of the modification. Respondents who are not sensitive to the study design follow the 1 − S path and either donate with probability E or keep the money with probability 1 − E. E and 1 − E operate as “catch-all” paths that capture any reasons for donating or keeping other than those captured by the S, 1 − S, BA, and 1 − BA paths. These might include additional processes that are not in the model, and/or responses to baseline amounts and modifications that deviate from our model predictions.
Model parameters estimate the probability of relying on the different processes depicted in the model. For convenience, we have defined these parameters with reference to the function they perform (e.g., the BA parameters estimate the probabilities of relying on the baseline amount). We assume, however, that these functions are executed by one or more underlying cognitive processes, such as those governing attention, decision making, and so forth. It is these cognitive processes that are of primary interest, and we assume that our analyses of model parameters provide information about the cognitive processes that they represent.
The right side of the model in Figure 2 shows each unique combination of baseline amounts and modifications and makes predictions about how respondents will behave in each condition, depending on which path through the tree they follow. The equations along the bottom of Figure 2 formalize the relationships among processes. Figure 2 only displays predictions and equations for task trials involving small modifications (e.g., small increases or decreases to the amount donated to charity). Both predictions and equations are the same when modifications are large, except that the E and S parameters are allowed to vary because the probabilities with which people rely on these processes often differ when modifications are large (see Appendix B). Although logically the BA parameter might also be expected to vary, the model selection process reported in Appendix B indicates that it generally does not. The exception is when a $2 or $3 baseline amount is combined with a large modification; these combinations produce estimates that do not significantly differ from one another but do differ from other BA parameters. They are therefore estimated with a separate parameter. The model, then, estimates six parameters: BA, BA23_bigmod (for $2 or $3, large modification), E1, E2, S1, and S2.
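The tree logic just described can be expressed as a single donation-probability equation per condition. The sketch below is a hedged illustration of that structure, not the authors' actual Figure 2 equations: the indicators `d_ba` and `d_mod` are hypothetical placeholders for the model's condition-specific behavioral predictions, and the example parameter values are invented.

```python
# Hedged sketch of the MPT structure described above (not the authors'
# actual equations, which appear in Figure 2 and Appendix B). With
# probability S a respondent is sensitive to the task; of those, a
# fraction BA respond to the baseline amount and 1 - BA to the
# modification. With probability 1 - S they donate at the catch-all
# rate E. The indicators d_ba and d_mod (1 = donate, 0 = keep) stand
# in for the model's predicted response on each path in a condition.

def p_donate(S, BA, E, d_ba, d_mod):
    """Probability of donating in one condition under the tree model."""
    return S * (BA * d_ba + (1 - BA) * d_mod) + (1 - S) * E

# Example: a condition where the model predicts "donate" on the
# modification path but "keep" on the baseline-amount path.
p = p_donate(S=0.7, BA=0.4, E=0.5, d_ba=0, d_mod=1)
# 0.7 * (0.4 * 0 + 0.6 * 1) + 0.3 * 0.5 = 0.42 + 0.15 = 0.57
assert abs(p - 0.57) < 1e-9
```

Solving the system of such equations across conditions (one per combination of baseline amount and modification) yields the parameter estimates reported below.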
Analysis Plan
Our analysis proceeds in three stages that roughly correspond to our three research questions. Each stage involves a different analytic approach and the results from one stage feed into subsequent stages. We therefore describe the methods and results from each stage separately. We also describe which analyses were preregistered and which were not. Full details on how the results in the manuscript relate to our preregistration can be found in Appendix A.
Stage 1: Testing Control, Speed, and Efficiency
Methods
Our first goal is to determine whether controlled processing is slow and inefficient, as assumed by an automaticity dominance approach. This means we must determine how controllability, speed, and efficiency are associated with the processes captured by the parameters in the model.
Following our preregistration, we tested processing features by randomly assigning approximately 100 respondents to each of six experimental conditions, and then testing to see if experimental manipulations changed the model parameters compared with a model fit to our main sample. Differences between the main and experimental conditions were assessed using χ2 tests (Riefer and Batchelder 1988). 8 Our first four conditions examined controllability by instructing respondents to selectively attend to some information presented on each task trial while ignoring other information. The parameter estimate associated with the information that is ignored should decrease if the underlying processes are controllable but remain unchanged if they are not. Estimates that do not change signal that the processes they represent affect decisions despite respondents’ intentions to disregard them.
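The logic of these model comparisons can be sketched in a few lines. The following is not the authors' code: the response counts and model-implied frequencies are hypothetical, and the sketch simply illustrates the standard likelihood-ratio (G²) comparison between a model that lets a parameter vary across samples and a nested model that equates it, of the kind developed for multinomial processing tree models by Riefer and Batchelder (1988).

```python
import math

def g_squared(observed, expected):
    """Likelihood-ratio fit statistic G^2 = 2 * sum(obs * ln(obs / exp))."""
    return 2.0 * sum(o * math.log(o / e) for o, e in zip(observed, expected) if o > 0)

def chi2_sf_df1(x):
    """Survival function of a chi-square with 1 df: P(X > x) = erfc(sqrt(x/2))."""
    return math.erfc(math.sqrt(x / 2.0))

# Hypothetical response counts for one trial type (not the study's data):
obs = [40, 25, 20, 15]
exp_full = [39.0, 26.0, 19.5, 15.5]        # model with the parameter free per sample
exp_restricted = [35.0, 30.0, 18.0, 17.0]  # model equating the parameter across samples

# The difference in G^2 between nested models is asymptotically chi-square
# distributed, with df equal to the number of parameters constrained (here, 1).
delta_g2 = g_squared(obs, exp_restricted) - g_squared(obs, exp_full)
p_value = chi2_sf_df1(delta_g2)
```

A significant difference indicates that equating the parameter across the main and experimental samples worsens fit, that is, that the manipulation changed the parameter.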
In the first condition, respondents were asked to just use information about how much money the charity would receive if they chose to donate (the modification), and to ignore information about how much money was at stake (the baseline amount). If the processes associated with the baseline amount are controllable, then BA and BA23_bigmod should decrease. In the second condition, these instructions were reversed: respondents were instructed to base their decisions only on the amount at stake and ignore the amount the charity could receive if they chose to donate. If respondents can successfully ignore this information, the estimate for 1 − BA will decrease (i.e., BA parameters will increase). In the third condition, respondents were told to ignore extraneous influences, that is, any influence other than the information presented in the task, such as their personal financial situations, current mood, and so forth. This should decrease estimates of 1 − S (i.e., increase S). In the fourth condition, respondents were instructed to ignore the task information and base their decisions entirely on personal considerations such as their personal financial situations, mood, and so on. Control over the processes governing sensitivity to the task should lead to decreased estimates for S parameters.
Note that tests of controllability selectively target the cognitive processes underlying the model pathways BA, 1 − BA, S, and 1 − S, but not E and 1 − E. This is because it is unclear which processes E and 1 − E represent, so there is no obvious way to test whether they are controllable. As we will see in stage 3, this will introduce some uncertainty into our estimates of how often respondents rely on deliberate processing.
Once we have established which processes are controllable, we can assess whether those same processes are inefficient and slow. The fifth experimental condition tested efficiency by asking respondents to complete the task while remembering a nine-digit random number. This number was changed periodically to prevent respondents from habituating. Random number–based cognitive loads have been used extensively in cognition research and are thought to occupy working memory resources (Bargh and Chartrand 2000; Miles 2015). Consequently, any process that is unaffected by the load must operate efficiently, that is, using minimal working memory resources. Estimates that decrease signal that the processes they represent are less efficient.
Our sixth condition assessed processing speed by asking respondents to “push” themselves to respond as quickly as possible. Time pressure is also a commonly used manipulation to examine processing characteristics (Bargh and Chartrand 2000; Cameron et al. 2017; Fazio 1990). Processes that are unaffected by the rapid pace of the task are more likely to execute quickly, while those that become less influential require more time to operate.
After preregistered data exclusions, the final sample sizes for all conditions were nmain = 612, nign_amount = 78, nign_modification = 71, nign_extraneous = 65, nign_task = 61, nnum_load = 47, nspeed = 93. 9
Results
We can determine which processing features are associated with the model parameters by comparing estimates from the model fit to the main sample to models fit to data from each of the six experimental conditions. Results are shown in Table 1. Parameter estimates from the main sample are shown along the left side of Table 1 and estimates from identical models fit to each experimental sample are shown in columns to the right.
Estimates of BA, E, and S Parameters for Models Fit to the Main and Experimental Samples.
Note: All χ2 tests have 1 degree of freedom. Est = estimate.
Estimates from models fit to data for respondents who tried to ignore the baseline amount differ substantially from those from models fit to the main sample. Notably, BA dropped from 0.09 to 0, signaling that respondents were able to comply with task instructions and disregard the baseline amount when making decisions. This indicates that the processes associated with evaluating the baseline amount are controllable. Exercising deliberate control over these processes changed other aspects of how respondents behaved, as seen in changes to the other parameters, but we do not interpret those changes here because this experimental condition was not designed to test the controllability of the processes involved.
Table 1 also shows that respondents were largely able to ignore the modifications. Estimates of BA increased from 0.09 to 0.79, while estimates of BA23_bigmod increased from 0 to 0.49, both signaling an increased probability of deciding on the basis of the baseline amount instead of the modification amount (i.e., estimates of 1 − BA decreased substantially). Additionally, S2 is higher than S1 in the main sample (0.46 vs. 0.29), indicating that large modifications weighed more heavily in respondent decisions than did small modifications. This difference disappears among respondents asked to ignore the modification amount (0.33 vs. 0.30), as we would expect. However, the fact that neither 1 − BA nor 1 − BA23_bigmod dropped to 0 indicates that respondents might not have been altogether successful at ignoring modifications. Substantively, these results indicate that the processes evoked by the modification amounts are largely, but perhaps not entirely, controllable. 10
The ignore extraneous influences condition was intended to increase the extent to which respondents engaged with the task. Consistent with this, the S parameter increased from 0.29 in the main condition to 0.45 in the ignore extraneous influences condition when modification amounts were small (S1), and from 0.46 to 0.58 when modification amounts were large (S2). The fact that these estimates increased supports the claim that respondents’ ability to attend to the task is controllable. However, the fact that neither estimate reached 1 could suggest that this control is imperfect.
The ignore task information condition was meant to increase decision making on the basis of extraneous factors. The results suggest that this occurred. The S parameters decreased to about half the magnitude of parameters from the model fit to the main sample (and 1 − S parameters increased). When modifications were small (S1), estimates changed from 0.28 to 0.14. When modifications were large (S2), estimates were 0.47 in the main condition and 0.24 in the ignore task information condition. This is consistent with the claim that ignoring task information is under respondent control. Neither estimate is 0, which could indicate that control is partial. Furthermore, if respondents were fully ignoring the task information, there should be no difference between estimates for small and large modification trials; both would have been ignored equally. However, the difference between S1 and S2 in the ignore task information condition is statistically significant (χ2 = 6.97, df = 1, p = .008). Thus, the most conservative conclusion is that control is partial.
We turn next to estimates from the cognitive load condition, which tests efficiency. The largest change is for BA, which doubles in size from 0.09 to 0.19. Substantively, this suggests that under cognitive load respondents based their decisions more often on the baseline amount compared with those not under load, and less often on the modifications (1 − BA). This could suggest that considering baseline amounts is a less cognitively demanding strategy for making donation decisions than using modifications. However, even under load respondents were much more likely to base their decisions on modifications than on baseline amounts (1 − BA = 0.81 vs. BA = 0.19). Other estimates change little between the main and cognitive load samples, though changes for E1 and S2 are significant. More important, none of the estimates in the cognitive load sample approaches 0, which would signal that the processes ceased to be influential under load. This suggests that all the processes under study can execute efficiently.
The same general pattern holds when examining estimates from the speed condition. Compared with the main sample, estimates in the speed sample change little, though changes to both E2 and S2 are significant. Of these, the change in S2 is the largest in magnitude, dropping from 0.46 to 0.39, and so is the most likely to reflect real change. Substantively, it would signal that engaging with the task elements when modifications are large might require a little more time. However, even this estimate remains far from 0, suggesting that all the processes under study can execute rapidly.
A summary of results from stage 1 is given in Table 2. All tested processes appear to be controllable to at least some degree—more so in the case of processes associated with the baseline amounts and modifications, and perhaps less so when the processes have to do with task engagement. Despite being controllable, model processes also execute efficiently and rapidly. These results are not consistent with the automaticity dominance perspective, which generally assumes that controlled processes are slow and inefficient.
Controllability, Efficiency, and Speed of Processes Underlying Model Parameters.
Note: NT = not tested; Y = yes.
Stage 2: Deliberate Processing and Problematic Situations
Methods
Our second question is whether deliberate processing is restricted to situations that respondents view as difficult, complex, or otherwise problematic. Our original (preregistered) plan was to assess this claim by comparing model parameters from models fit to responses that differed in difficulty and novelty to see if difficult or novel situations lowered estimates for processes that operate via automatic cognition while increasing estimates for parameters that operate via deliberate cognition. However, results from stage 1 suggest that the processes captured by the pathways S, 1 − S, BA, and 1 − BA are all controllable to some degree, which means that observing shifts in influence from one model pathway to another can tell us little about whether deliberate processing is increasing or decreasing. The preregistered analyses are therefore reported in Appendix C.
Instead, we try to detect deliberate processing using other information. Examining common correlates like response time might help identify instances of deliberate control, though results from stage 1 suggest that speed differences between controlled and automatic processing might not be large. We therefore also examine respondents’ self-reports of their thought processes during the task. Respondents were asked (1) whether they thought carefully or relied on their intuitions and first impressions (1 = “I relied exclusively on careful thinking,” 4 = “I relied about equally on careful thinking and my intuition,” 7 = “I relied exclusively on my intuition”; reverse coded), (2) how often they had mixed feelings or conflicting thoughts about how to act (1 = “never,” 5 = “all of the time”), and (3) how often they had to stop and think about whether to keep or donate the money (1 = “never,” 5 = “all of the time”). Higher values on these measures were taken as evidence of more deliberate processing.
We operationalize problematic situations in two ways. The first compares respondents who scored in the first and fourth quartiles of a task difficulty measure, with higher difficulty assumed to represent a more problematic situation. The task difficulty measure is the average of responses to the following two questions: (1) “How often did you find it easy/difficult to decide between keeping or donating the money during the task? Decisions were . . . [1 = ‘Always easy’ . . . 4 = ‘Easy about half the time, hard about half the time’ . . . 7 = ‘Always hard’]” and (2) “How often did you find it simple/complicated to decide between keeping or donating the money during the task? Decisions were . . . [1 = ‘Always simple’ . . . 4 = ‘Simple about half the time, complicated about half the time’ . . . 7 = ‘Always complicated’].” The scale reliability was α = 0.88.
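The reported reliability follows the standard Cronbach's alpha formula, α = k/(k − 1) × (1 − Σσ²ᵢ/σ²_total). A minimal stdlib sketch with hypothetical ratings on the two difficulty/complexity items (the data shown are illustrative, not the study's):

```python
from statistics import pvariance

def cronbach_alpha(items):
    """Cronbach's alpha for a list of item-score columns (one list per item)."""
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]  # each respondent's scale total
    item_var = sum(pvariance(col) for col in items)
    return (k / (k - 1)) * (1 - item_var / pvariance(totals))

# Hypothetical responses to the two 1-7 items (easy/hard and simple/complicated):
easy_hard = [2, 4, 5, 1, 3, 6, 2, 4]
simple_complicated = [2, 5, 4, 1, 3, 6, 3, 4]
alpha = cronbach_alpha([easy_hard, simple_complicated])
```

With only two items, alpha is high whenever the items covary strongly relative to their individual variances, as is the case here.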
The second way we operationalize a problematic situation is by comparing respondents from the four ignore conditions to those in the main condition. The rationale is that having to selectively disregard information is more challenging than responding naturally, as those in the main condition would have done.
For both tests of problematic situations, differences between groups high and low in problematicity are assessed using t-tests.
Results
We first compare those who were in the first and fourth quartiles of the task difficulty/complexity measure. Results are shown in Table 3. Respondents in the fourth quartile were more likely than those in the first quartile to report having mixed feelings or conflicting thoughts and having to stop and think about decisions. This suggests an increased reliance on deliberate processing. However, this extra effort did not seem to greatly increase response times, which do not differ significantly from those in the first quartile. Furthermore, both groups reported relying slightly more on careful thinking than on intuition (means > 4), and the difference between the groups on this measure is not significant. These results suggest that deliberate processing increases when situations are more problematic, as many scholars have argued. However, the fact that even those in the first quartile reported relying on careful thinking indicates that respondents engaged in some level of deliberate processing even when the task was not perceived as challenging.
Mean Values for Response Times and Subjective Experiences, by Quartiles of Task Difficulty (Main Sample).
It is also worth noting that no one in our sample found the donation task extremely difficult or complex. The average rating on the task difficulty/complexity measure was 1.16 among those in the first quartile of that measure and 4.10 among those in the fourth quartile. Although this seems like a large difference, a rating of 4 corresponds to saying that the task was easy or simple about half the time, and hard or complicated about half the time. Very few respondents rated the task as being hard or complicated more than half the time. This suggests that intentional decision making occurred in situations that were only experienced as mildly or moderately problematic. Thus, although the use of deliberate processing seemed to increase as levels of problematicity increased, the level of difficulty or complexity needed to evoke deliberate cognition was not great.
We next compare respondents from the main condition to those in the four ignore conditions. Ignoring information requires deliberate processing to selectively attend to and integrate inputs during the decision-making process; thus we can expect that the demands imposed by task instructions evoked deliberate processing. Respondents in the main condition had no such instructions, and therefore faced a comparatively nonproblematic situation. Do those in the main condition show less evidence of deliberate processing?
Results are shown in Table 4. The predominant pattern is that those in the main condition do not differ from those in the ignore conditions. There are a few exceptions. Those asked to ignore modifications or task information reported relying less on careful thinking than those in the main condition, suggesting that respondents who were left to complete the task without special instructions thought more carefully than those required to ignore information. The other exception is that those asked to ignore task information reported having mixed feelings or conflicting thoughts somewhat more often than those in the main sample. Another striking pattern is that estimates are almost invariably on the low end of their respective scales. Respondents across conditions reported that they relied about equally on careful thinking and intuition (scale point 4), or else somewhat more on careful thinking (scale point 5). Similarly, respondents reported rarely (2) to occasionally (3) having mixed feelings or conflicting thoughts about how to act, and rarely (2) to occasionally (3) having to stop and think about whether to keep or donate the money during the donation task. 11
Mean Values for Response Times and Subjective Experiences, by Condition.
Respondents in the main condition took an average of 3.79 minutes to complete the task. Average response times were lower in the four ignore conditions, though these differences were generally not significant, except for those asked to ignore the modification amounts (mean = 3.30 minutes; test of difference: t = 2.90, p = .004). Respondents in the main condition thus did not respond more quickly than those asked to ignore information. Row 2 in Table 4 shows that respondents in the four ignore conditions reported somewhat higher levels of difficulty than those in the main condition, but these differences were not significant. More important, average difficulty estimates in all conditions are below three, which puts them in the lower end of the response scale (range = 1–7, midpoint = 4). Taken together, these results give little reason to suppose that respondents in the main condition were relying on faster, more efficient cognitive processes than those explicitly tasked with controlling their decision processes.
The overarching pattern is that those in the main condition generally did not differ from those in the ignore conditions in how they experienced and engaged with the task. This indicates that respondents in the main condition voluntarily engaged deliberate thought processes even though situational demands for deliberate processing were low. Furthermore, the deliberate processing engaged in by participants differs from the slow, effortful deliberation often depicted in automaticity dominant accounts. Although respondents reported that they thought carefully about half the time, they also found the task easy enough that they rarely had mixed feelings or had to stop and think about what to do. This could be because, as stage 1 indicates, respondents did not find deliberate processing to be prohibitively slow or difficult.
Overall, the picture that emerges from stage 2 is that deliberate processing is positively associated with how problematic a situation is but is not restricted to highly difficult or complex situations, nor to situations that strongly cue deliberate cognition. Although we cannot say that the donation task was entirely nonproblematic for respondents—and hence that deliberate processing occurs in the absence of problematicity—it seems evident that the level of problematicity was generally low. Instead, we conclude that deliberate cognition can be cued by weakly problematic situations, that is, situations that contain some novel and/or mildly challenging elements (such as a novel but easy decision task). Weakly problematic situations benefit from some deliberate decision making and control, but do not require deep or prolonged thought.
Stage 3: Estimating the Prevalence of Deliberate Processing
Methods
Our final research question is how often people deliberately control their behavior. Stages 1 and 2 suggest deliberate control of behavior might be common given that most cognitive processes are controllable to some degree, operate quickly and efficiently, and can be deployed even when situations are only weakly problematic. Perhaps because of this, respondents in the main condition seemed to make use of deliberate cognition even though they were not explicitly instructed to do so.
In stage 3, we take the additional step of calculating total influence probabilities, which are the total probabilities of relying on each process across all trials in the model, given the conditional nature of the relationships in the processing tree (Miles et al. 2023). Total influence probabilities are only calculated for the terminus branches on a processing tree. In our case, that means the primary comparison will be between the BA and 1 − BA pathways, which are likely controllable, and the E and 1 − E pathways for which controllability could not be determined. Uncertainty about whether or not the processes underlying E and 1 − E are controllable means that we will not be able to provide a direct estimate of how often respondents relied on deliberate processing. We can, however, place a lower bound on the likelihood of deliberate processing by determining how often respondents relied on the BA and 1 − BA pathways.
Results
Total influence probabilities are shown in Table 5. 12 We see that respondents generally decided whether to keep or donate money on the basis of criteria other than the baseline amount or the modification (Etotal and [1 − E]total). When modifications were small, these criteria accounted for 0.27 + 0.44 = 0.71 or 71 percent of the decisions. When modifications were large, this dropped to 0.20 + 0.35 = 0.55. Of the task features, respondents based their decisions much more frequently on the modification amounts ([1 − BA]total and [1 − BA23_bigmod]total) than the baseline amounts (BAtotal and [BA23_bigmod]total). When modifications were small, modifications guided 27 percent of decisions compared with 3 percent of decisions that were shaped by the baseline amount. When modifications were large, they guided 42 percent versus 4 percent of decisions when the baseline amount was $1 and 46 percent (vs. 0 percent) when the baseline amount was $2 or $3.
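Total influence probabilities are products of conditional branch probabilities along each path through the tree (Miles et al. 2023). A minimal sketch of the small-modification case, assuming (consistent with stage 3's description of S and 1 − S as gateways) that trials branch first on S and then on BA when engaged with the task and on E when not; S1 and BA are the main-sample estimates quoted earlier, while E1 is an illustrative value rather than an estimate from Table 1:

```python
# S1 and BA are the main-sample estimates quoted in the text (small modifications);
# E1 is an illustrative value, not an estimate from Table 1.
S1, BA, E1 = 0.29, 0.09, 0.38

totals = {
    "BA_total":     S1 * BA,              # engage task, rely on baseline amount
    "(1-BA)_total": S1 * (1 - BA),        # engage task, rely on modification
    "E_total":      (1 - S1) * E1,        # disengaged pathways
    "(1-E)_total":  (1 - S1) * (1 - E1),
}

# The terminal totals partition all trials, so they must sum to 1.
assert abs(sum(totals.values()) - 1.0) < 1e-9
```

With these inputs the baseline-amount pathway accounts for only about 3 percent of trials, while the modification pathway accounts for roughly ten times as many, matching the pattern of magnitudes reported above.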
Total Influence for the Small and Large Modification Models.
These results do not support the claim that deliberate processing is uncommon. Between 29 percent and 46 percent of decisions occurred through pathways that include BA and 1 − BA, both of which represent processes that are controllable. These percentages represent a lower bound on deliberate processing. Deliberate processing could be higher if some of the processes captured by E and 1 − E are also controllable. Additionally, S and 1 − S govern engagement with the task and hence act as gateways to BA, 1 − BA, E, and 1 − E. Stage 1 indicated that the processes captured by S and 1 − S are controllable to some extent, which suggests that every pathway through the processing tree might be subject to deliberate control. The cumulative evidence thus suggests that deliberate processing shaped anywhere from a substantial minority of donation decisions to all of them. This estimate is admittedly imprecise but is sufficient to suggest that deliberate processing is relatively common.
Discussion
Sociologists and the public alike often believe that automatic cognition is a powerful, often decisive influence on behavior, but this idea has not gone unchallenged. Although most scholars accept that automatic cognition plays an important role in behavior, critics have questioned whether automatic thought is as dominant, and deliberate cognition as rare, as is often assumed. We tested the dominance of automatic cognition by examining whether deliberate control of behavior is rare, first by examining the theoretical assumptions that support this claim, and then building on those results to estimate how often respondents controlled their behavior.
We found that respondents made decisions using controllable processes roughly a third of the time, and possibly more frequently. Furthermore, respondents chose to exert this control even when not required to do so by explicit task instructions. Possibly, this is because the donation task constituted what we have called a weakly problematic situation, a situation with some novel or mildly challenging elements that therefore benefits from some level of deliberate engagement. Deliberate processing increased as situations were seen as more problematic, as much prior work predicts, but its use was not restricted to the most challenging contexts. A plausible reason for this is that deliberate processes were simply not that challenging to use. In contrast to the slow, inefficient deliberation often depicted in the literature, we found that controlled processing could execute both rapidly and efficiently.
Our results are difficult to reconcile with an automaticity dominance approach and instead favor a more “deliberation friendly” understanding of cognition. Prior work has demonstrated that people employ deliberate cognition across various domains, often alternating with automatically activated cognitions or when the integration of multiple inputs is required (Cerulo 2018; Leschziner and Brett 2019; Vila-Henninger 2015, 2021a). These findings sit uneasily with models that depict deliberate processing as slow and effortful but make more sense if deliberation can operate quickly and efficiently, as our work indicates. Our work also identifies when deliberate cognition will operate quickly and efficiently: when situations are weakly problematic (cf. Luft 2020). This, too, is in keeping with prior work, which generally has described situations of this sort. Consider the challenges posed by modifying a culinary dish (Leschziner and Green 2013) or doing improvisational comedy (Brett 2022). Although many might find cooking and improvisation daunting, chefs and improv comedians (respectively) can draw on a broad base of (likely automatized) knowledge and skills that frees up their deliberate cognition to focus on the novel aspects of their work, a task that, for each, involves integrating familiar inputs to create something new. Thus, our study joins prior work in suggesting that levels of deliberate processing will be tied both to prior socialization and the requirements of the immediate situation, but indicates that the “sweet spot” for fast and efficient deliberation is when automatically deployable skills and knowledge encounter a situation where they are almost—but not quite—able to select an appropriate behavior on their own. In such cases, a modest level of deliberate processing will be required. 13
Although our findings suggest a larger role for deliberate processing in human behavior and hence favor deliberation-friendly perspectives, they do not provide support for any specific model of cognition. The simple reason is that the cognitive models proposed by sociologists address the cognitive mechanisms that support behavior more than the relative prevalence of automatic and deliberate influence. This means that each model can be cast in a way that allows for greater influence by deliberate processes. For instance, consider the default-interventionist framework that is often (implicitly) adopted by proponents of automaticity dominance. Nothing in our findings suggests that automatic processes are not first on the scene, calling up situationally relevant cultural understandings, motor-schematic competencies, and (possibly) associations or concepts that a person does not intend to consider, such as stereotypes (Baumeister and Bargh 2014; Cameron, Brown-Iannuzzi, and Payne 2012; Rivers and Sherman 2018). They are also entirely consistent with the idea that deliberate cognition only activates in the face of a problematic context. What our results change is how often we expect people to rely on defaults relative to interventions. Although deliberate processes are undoubtedly slower and less efficient than automatic processes—indeed, it is difficult to imagine something faster and easier than automatic—our results suggest that these deficits need not be prohibitively large. Consequently, respondents need not reserve deliberate control for situations that are unusually problematic but can employ it whenever a situation arises that could benefit from even a low level of deliberate engagement, that is, in weakly problematic situations (cf. Leschziner and Brett 2019; Vila-Henninger 2015). 
This implies that deliberate interventions will be more common than default-interventionist accounts often suggest but does not necessarily violate the underlying default-intervention structure.
Although our results are not useful for adjudicating between competing cognitive models, they do suggest that we might need to change how we think about the relationship between cognitive processes and action. Work to date generally focuses on the distinction between automatic and deliberate cognition as a primary mechanism for explaining action, leading to questions like what sorts of actions each type of cognition supports, how much influence each type has relative to the other, and so on. Our findings suggest that instead of types of processes, we should instead consider characteristics of processes. In this view, most behavior is supported by processes that are fast and efficient, regardless of whether they are automatic or deliberate. Furthermore, people do not necessarily rely exclusively on the fastest or most efficient processes, but rather on processes that are fast and efficient enough. It is worth noting that this does not fundamentally change the logic that proponents of automatic influence already rely on: fast and efficient processes have a practical advantage, so they are most likely to guide behavior. The only real difference is that it acknowledges that these characteristics are not necessarily restricted to automatic cognition.
An important task moving forward will be to determine empirically whether our findings hold in different contexts and across different types of behavior. To date, sociologists have shown evidence of deliberate processing in a wide range of behaviors including cooking (Leschziner and Green 2013), moral decision making (Vila-Henninger 2021b), interpreting scents (Cerulo 2018), performing improvisational comedy (Brett 2022), expressing prejudice (Saito et al. 2024), and donating money (this study). However, much of the existing work infers deliberate processing from behavior, talk, or other indirect cues. In this study we attempted a more direct test of deliberate processing but were forced to rely on self-reports of cognitive processing after preregistered analysis plans proved noninformative. Thus, future work should determine whether our main findings hold for behaviors that are both socially consequential and widely assumed to be heavily shaped by automatic processing, such as discrimination, consumer purchasing, and health-related practices. Ideally, this work will use direct, preregistered tests of deliberate influence on behavior. We also encourage researchers to simultaneously examine multiple characteristics of the cognitive processes they study. This will make it possible to test our hypothesis that it is fast and efficient processes (vs. automatic processes per se) that guide behavior.
Much of the existing work on cognition, including this study, implicitly advances a one-size-fits-all model that only allows variation in cognitive processing based on features of the environment. However, several scholars have pointed out that preferences for using deliberate cognition vary in systematic ways across individuals (Brett and Miles 2021; Epstein et al. 1996; Leschziner and Brett 2019), and supplemental analyses of our data reported in Appendix D support this claim. Individual variation in cognitive processing might influence how tightly deliberate processing is bound to the perceived problematicity of situations, and how well practiced respondents are at deliberate processing, which could directly affect how quickly and efficiently it operates. This implies that the automaticity dominance and more deliberation-friendly perspectives might represent two ends on a continuum that captures individual variation in how much influence deliberate cognition has on behavior. Placement along this continuum could be tied to biological differences in cognitive capacities, developmental stage (e.g., with children exercising less deliberate control), and demographically patterned differences in life experiences (e.g., education; Brett and Miles 2021).
Relatedly, we used data from a nonrepresentative online panel of U.S. residents. Although the data do not include demographic measures, online samples tend to skew younger, female, and more educated (e.g., Litman and Robinson 2021). Prior work suggests that college graduates are more prone to use deliberate cognition, while women and (possibly) older individuals rely less on deliberate cognition (Brett and Miles 2021). It is therefore unclear how well our sample represents cognitive processing in the United States population, and caution leads us to assume that we somewhat overestimate the use of deliberate cognition. Future work would benefit from examining the use of deliberate processing across a wider variety of people. Representative samples will be helpful here, but so will samples that target variation in the sorts of individual characteristics related to deliberate processing.
We hesitate to make strong claims about the practical implications of our findings considering the remaining gaps that must be filled to delineate the full scope of deliberate influence on behavior. That said, we tentatively hypothesize that deliberate influence on behavior will be most likely when there is a demand for deliberate control and countervailing influences are minimal. This should occur when people are in weakly problematic situations that provide sufficient allowances for deliberation, such as adequate time and a lack of cues that activate strong automatic inclinations that could heavily influence or even override deliberate processes. How frequently such situations occur is likely to vary across behavioral domains, across individuals who are socialized in different ways, and across positions in the social structure. Yet we suspect that these features characterize many of the situations people commonly encounter in their daily lives, particularly when they are living in stable, familiar circumstances. This suggests that the influence of deliberate cognition on behavior could be quite broad, and that some theories and practices that take strong automatic influence as a premise might need to be revised.
As an example, companies invest considerable resources in diversity-related initiatives, with annual costs expected to exceed $24 billion by 2030 (StrategyR 2021). Popular among such initiatives are implicit bias trainings that attempt to reduce the automatic thoughts and attitudes that are assumed to give rise to discriminatory behaviors. Although we do not deny that automatically activated biases exist or that they can sometimes influence behavior, a stronger model of deliberate control suggests that their influence on actual practice might not be as great as is often assumed. We reason that many people are motivated to avoid discrimination (for both personal and social reasons), which means that encountering a member of a disadvantaged group is likely to qualify as a weakly problematic situation—“problematic” because there is a risk of violating norms around nondiscrimination, but “weak” because people generally will have the know-how and skills to navigate the interaction successfully. In such conditions, people should be able to quickly and efficiently apply deliberate processes to monitor the situation and make small behavioral adjustments as needed. In short, many people might already be fairly adept at managing their negative automatic reactions. This could be part of the reason why implicit bias training and related diversity initiatives rarely change diversity-related outcomes in organizations (Devine and Ash 2022; Dobbin and Kalev 2018) and why meta-analyses reveal weak to nonexistent effects of implicit processes on behavior (e.g., Forscher et al. 2019). This in turn suggests that companies could invest their resources in initiatives that might be more effective than implicit bias trainings.
In a similar vein, a stronger role for deliberate control of behavior could suggest a need to reconsider policies aimed at redressing inequities in hiring, peer review, and so forth that are premised on the notion of powerful implicit effects (e.g., AAMC 2023; Onken et al. 2021).
The idea that automatic processes are significantly more influential than deliberate processes in shaping behavior is widespread among both scholars and the public. Building on recent critiques, we have shown that this claim might be too strong. Instead of the slow, laborious processing often depicted in past work, we found that deliberate cognition could be fast and efficient, allowing it to make regular contributions to donation behavior. To return to the common metaphor, our work suggests that rather than a hapless rider and an unwieldy elephant, our minds might often be more akin to a responsive guide astride a reasonably well-trained and compliant mount. This, in turn, invites scrutiny of the theories and real-world practices that presuppose strong automatic influence on behavior.
Funding
The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: This work was supported by grant 435200137 from the Social Sciences and Humanities Research Council of Canada.
Supplemental Material
Supplemental material for this article is available online.