Abstract
Ethical decision-making is inherent to the research ethics committee (REC) deliberation process. While ethical codes, regulations, and research standards are indispensable in guiding this process, decision-making is nonetheless susceptible to nonrational factors that can undermine the quality, consistency, and perceived fairness of REC decisions. In this paper I identify biases and heuristics (i.e., nonrational factors) that are known to influence reasoning processes among the general population and various professions alike. I suggest that such factors will inevitably arise within the REC review process. To help mitigate this potential, I propose an interventive questioning process that RECs can use to identify and minimize the influence of the nonrational factors most likely to impact REC judgment and decision-making.
The effectiveness and value of research ethics committees (RECs) unquestionably rest on the quality of the ethical review rendered. Although REC review often involves close scrutiny of risk/benefit considerations, there are, of course, numerous additional ethical decisions to address: Are data security concerns effectively managed? Are there limitations to the capacity to consent? Is expedited review or full review required? Are participants being appropriately compensated? And so forth. Many such decisions are easily made, with some requiring little more than ensuring compliance with nonnegotiable standards, bereft of concentrated cognitive effort. Many other decisions, however, involve varying degrees of ethical deliberation and interpretation, instances that are often punctuated in REC meetings by extensive discussion and debate. Inevitably, flawed reasoning processes infiltrate such sites of deliberation; in this paper I delineate their influence specifically in the context of ethical review and outline ways in which RECs can take steps to minimize their impact.
Quality concerns
Despite being a fixture of scientific research for almost 50 years, evidence suggests that the effectiveness, and hence reputation, of RECs has been marred by quality concerns, defined broadly as inconsistent decisions and inefficient review processes (Abbott and Grady, 2011; Lynch et al., 2019; Nicholls et al., 2015). Concerns regarding consistency appear to be particularly vexing, with numerous multisite studies demonstrating disparate decisional outcomes, some of which appear to be wholly contradictory (e.g. Helfand et al., 2009; Pritchard, 2011; Stair et al., 2001; Stark et al., 2010). Assessment of risk, which is, of course, paramount to determining the level of review, is concerningly unreliable. This was the finding in a mixed-method study by Green et al. (2006), who distributed an ethics application to 43 sites that was purposely designed to qualify for expedited review. Of the 43 sites, “one site exempted it from review (although it did not qualify for exemption), 10 granted expedited review, 31 required full review, and one rejected it as being too risky to be permitted” (p. 243). Comparable results in the domain of risk/benefit assessment have also been observed in similar studies (e.g. Shah et al., 2004; Van Luijn et al., 2002). While some degree of variability is inevitable (Stark, 2007), and may point to the influence of geographical or cultural variations that reveal justice considerations (Edwards et al., 2004), efforts to improve REC effectiveness nonetheless remain salient.
Various reasons have been offered to account for noted shortcomings in REC effectiveness, including poor training, lack of diversity among REC membership, limited resources, high workloads, and general differences in moral predilection (Pritchard, 2011). In some instances, questionable REC reviews have been attributed to what has been called “mission creep,” which is the tendency to overextend the REC role beyond the boundaries of its purported mandate (Cook et al., 2013; Haggerty, 2004; Mueller, 2007). In addition to mission creep, Fitzgerald (2005) contends that sensational or controversial ethical events at the macrolevel can incite a form of moral panic which filters down to microlevel deliberation processes and decisions within RECs. Fitzgerald also draws attention to microlevel group processes, derived from her direct observations, that produced temporal irregularities during REC meetings, where some applications received “slow and deliberate scrutiny” whereas others were reviewed “with great swiftness” (p. 330). For example, she noticed that the pace of ethical review during meetings was often rushed just prior to lunch, with the converse being true following lunch. Examples such as these negatively influence the quality of review during REC meetings, ostensibly contributing to inconsistency derived from factors that have nothing to do with ethical analysis.
Regardless of the reason behind REC deficiencies, certain costs will predictably be borne by the various parties of the research endeavour (Abbott and Grady, 2011). For researchers, inefficient processes and dubious decisions can lead to delayed research, cost overruns, loss of funds, and in some instances a disinclination to follow through with research. For research participants, poor REC decisions can lead to increased exposure to risk, actual harm (physical, dignitary, economic, legal), or lost opportunity to participate in, and benefit from, research. Research ethics committees themselves may be unduly burdened by inefficiencies that lead to protracted review processes and the ignominy of knowing that researchers view them as adversary rather than ally. In light of the noted concerns, there are ample reasons for RECs to want to improve the quality of their ethical deliberation.
The previously stated compendium of possible REC concerns and their outcomes has spawned a number of attempted remedies, some of which are instrumental in nature (increased training/resources) and some of which point to larger governance intervention (upstream policy change, etc.). I turn now to other sources of variability, sources that are psychological in nature and universal in scope (Pritchard, 2011). These I, along with others (e.g. Rogerson et al., 2011), refer to as “nonrational factors.” It should be noted, however, that this noun phrase is used as convenient shorthand for psychological processes that universally influence judgment and decision-making (Kahneman, 2013), and certainly not as a pejorative dichotomy. Rather, as will be elaborated upon later, nonrational outcomes are those which (1) depart from logical and deliberate reasoning, and (2) are characteristically produced by undetected or unexamined intuitive processes. The role of nonrational factors specifically, and psychological processes and characteristics generally, has received little empirical investigation within REC scholarship (Anderson and DuBois, 2012; Pritchard, 2011). Klitzman did, however, incidentally discover within his qualitative study of variation among RECs that some of his 46 interviewees based their decisions on “gut feelings” and “intuition” and cited inadequate training as their reason for doing so (p. 9). Similarly, qualitative research by Van Luijn et al. (2002) that examined risk/benefit assessment among members of Dutch academic hospital RECs intimated the presence of intuitive reasoning, noting that a significant proportion of interviewees felt ill-equipped to assess the probability and degree of risks and benefits and so instead relied upon “overall impressions” (p. 1310).
Though meagre, this body of evidence adds empirical credence to the probability that psychological factors, including what I refer to as nonrational factors, impinge upon the deliberation processes inherent to REC work. In what follows I draw attention to select nonrational factors that can intrude upon and influence the types of decisions rendered in research ethics review.
Cognitive processes
When making judgments and decisions, uncontaminated reason routinely escapes us (Kahneman, 2013). Nearly every instance of cognitive effort is influenced to some degree by intuitive processes that shape our reasoning outside of conscious awareness. While research is scant, it can be assumed that REC members are susceptible to the same types of systematic errors found within the general population (Kahneman, 2013) and other professions, including medicine (Saposnik et al., 2016), law (Cooper and Meterko, 2019), and management (Acciarini et al., 2020). With no recourse to entirely eliminate such furtive processes, we are enjoined in the sphere of REC work to minimize their influence. To this end, I turn now to an explication of biases and heuristics that have a probable impact on REC judgment and decision-making.
Framing effects
Nobel laureate Daniel Kahneman and his longtime collaborator Amos Tversky were the first to draw attention to how the way a statement is framed influences perceptions of risk. In their classic research study, Tversky and Kahneman (1981) demonstrated that people’s responses to the same risk preference scenario varied significantly depending on how the scenario was described. The question these researchers posed to participants when investigating framing effects was this: Imagine that the United States is preparing for the outbreak of an unusual Asian disease, which is expected to kill 600 people. Two alternative programs to combat the disease have been proposed. Assume that the exact scientific estimates of the consequences of the programs are as follows: If Program A is adopted, 200 people will be saved. If Program B is adopted, there is a one-third probability that 600 people will be saved and a two-thirds probability that no people will be saved. Which one of the two programs would you favor? (p. 702)
What Tversky and Kahneman found was that, presented with this scenario, most participants chose Program A, thus evidencing a preference to save lives and avert loss when it is presented as a sure thing. The wording of the scenario was then altered to say this: If Program A is adopted, 400 people will die. If Program B is adopted, there is a one-third probability that nobody will die and a two-thirds probability that 600 people will die.
Participants in this instance were more likely to choose Program B, evidencing a willingness to assume risk. Other than the language used, the outcomes in Kahneman and Tversky’s two scenarios are materially equivalent. The instructive finding here is that when the salient feature of an outcome is presented as loss, people are more likely to avoid risk, with the opposite holding true for gains. Given the work of RECs, framing effects can potentially occur any time projected losses or gains become the focal point of discussion. Consider, for example, the following responses to the question of a potential novel therapeutic approach:
It is estimated that 35% of participants will benefit from the therapeutic condition.
It is estimated that 65% of participants will not benefit from the therapeutic condition.
If this were an instance where the novel therapeutic approach was associated with some discomfort, and if Kahneman and Tversky’s framing effect were to hold true, then REC members would be more likely to ascribe an acceptable level of risk to the first description over the second.
Heuristics
The word heuristic is derived from the Greek word heuriskein, which means to discover. In modern times the word is used to convey an efficient and shortened path to knowing, which, though eminently pragmatic and often accurate, can nonetheless lead people to ignore or modify information that might otherwise contribute to better judgment. In common parlance we refer to these heuristics as biases because they influence our thinking in unintended, prejudicial, and unnoticed ways. All humans are equally susceptible to such biases, and while they spare no one, this is not as dire as it might seem. In many contexts shorthand cognitive processes serve us quite well, helping to parse extraneous information in ways that advance the efficiency and accuracy of our judgments. There are, however, times when “fast and frugal” cognitive processes (Todd and Gigerenzer, 2007: 169) need to be restrained, giving way, as it were, to slower, more deliberate and nuanced cognitive operations.
Availability heuristic
Availability stands as the cardinal feature of many cognitive heuristics and rests on the axiom that some thoughts come to mind much more easily than others. The availability heuristic is emblematic of this finding. According to Kahneman (2003), the availability heuristic leads individuals to overestimate the frequency of an event based on how effortlessly it comes to mind: the more readily an event comes to mind, the more likely it will be viewed as a common occurrence. Certain conditions tend to amplify this tendency, such as when an event is tied to personal experience, when it is especially striking and salient, or when it is recent (Kahneman, 2013). For example, if an extended family member were to slip on ice and break a bone, one might then overestimate the general number of people this happens to in a given year (personal experience). A person who witnesses a drive-by shooting is more likely to think it is a common occurrence than one who reads about a shooting in the newspaper (salience); and if this witnessing of a shooting occurred yesterday versus 3 years ago, estimates of frequency will again be larger (recency). In all instances the actual statistical probability of the event remains the same. All that has changed is the ease by which the mental representation of the event comes to mind. This effect is so robust that even an easily imagined event that one has never experienced can lead to overestimations of frequency. For example, it is easier to imagine dying in a plane crash than falling down the stairs, although the probability of the latter is far greater. This, of course, can lead one to both over- and underestimate the degree of risk.
Examples of how the availability heuristic can affect REC decision-making processes include:
• If a REC member’s family member experienced a severe, yet uncommon, side-effect from a particular medication, that reviewer might overestimate the frequency of this response across the general population.
• If a REC member witnessed a mugging in a park, the REC reviewer might view observational research that occurs in a park as too risky even though such assaults are statistically very rare in the identified park.
• If a REC member witnessed a mugging last week, then perceptions of the frequency of such incidents would be higher than if the mugging had been witnessed 5 years earlier.
Anchoring heuristic
An anchoring effect occurs when initial information influences how subsequent judgments are made. In the words of Tversky and Kahneman (1974), “different starting points yield different estimates, which are biased toward the initial values” (p. 1128). For example, if you were to come upon a funky used film camera and someone asked you if it should be priced more or less than $100, and then later someone asked you to estimate the price of the camera, you would likely guess about $100. This is because your subsequent estimate is anchored to the initial value that was presented. A critical element of this heuristic is the inclination to disregard new evidence that should serve to alter initial impressions. The anchoring heuristic can manifest in manifold ways within the research review process. For example, when considering monetary incentives, if a researcher initially indicated that participants would be paid $1000 to participate in high-risk biomedical research, the REC might deem this too high and accept $750, not due to the reasonableness of the second amount, but because it was anchored to the initial amount.
Framing effects and heuristics can both influence how RECs assess risks and benefits associated with research. Affect, although often neglected as an influencing factor in decision-making, also assumes a critical role.
Affective processes
Despite cognition’s historical dominance in the domain of information processing systems, evidence indicates that affective processes also significantly impact how people form judgments and make decisions, and as with cognitive processes, often without our awareness. A novel and useful framework for this comes from conceptual work by Peters et al. (2006). These authors posit that affect serves a heuristic function in decisional processes in four ways: affect as information, as a spotlight, as a motivator, and as common currency. Each of these can aptly be linked to the forms of decision-making involved in the ethical review of research.
Affect as information
Affect as information is the tendency to unconsciously append emotional values to things, events, and people in our lives. This process is informational insofar as feelings connote a message which then informs a judgment that arrives prior to, or in substitution for, deliberate cognitive processing. The affect as information heuristic is revealed by the subliminal tendency to use the words “think” and “feel” interchangeably. When someone says “I feel that people are increasingly dishonest these days,” the manifest sentiment associated with this statement more accurately resembles a thought than a feeling. Yet this could indicate the presence of an implicit, or unconscious, emotional state that is actually influencing, or responsible for, the declared judgment. In this instance, something said or something that happened prior to this utterance could have been paired with a negative emotional state, which then casts a negative hue on the rendered judgment. The resulting decision might more accurately be viewed as what one feels about the situation than what one thinks about it, with rational thought being effectively overridden by a pre-reflective emotional evocation. Importantly, research indicates that favourable feelings lead to judging risks as low and benefits as high, with just the opposite being true of negative feelings (Alhakami and Slovic, 1994). This, of course, can directly impact the decisions made by REC reviewers. For example, if a REC reviewer harboured negative feelings toward alcohol use, this person would likely assess research into the effects of alcohol use at a higher risk level than a reviewer who did not harbour such negative feelings.
Affect as a spotlight
With many potential points of attentional focus, human attention tends to home in on the source of an evocative feeling. Attention, in this sense, is affectively selective: In the realm of competing points of interest, we tend to be drawn to that which evokes strong emotion (Nabi, 2003). With the “spotlight” cast upon the emotionally charged focal piece, other less evocative affective content recedes into the background. As Peters et al. (2006) explain, “Depending upon how people feel about an object, they may focus on different information. The function of affect as a spotlight takes advantage of the role of feelings in directing cognitions to address the source of the feeling” (p. 147). When this occurs during decision-making, we might only see what is rendered accessible and readily discernable by strong emotion, thus overlooking other potentially important sources of decision-relevant information. For example, if a REC member were to review an application in which the research featured the use of graphic images to discourage smoking, the emotive response associated with the graphic image could make it less likely that other significant features of the ethics application would be afforded necessary discernment.
Affect as a motivator
Lay wisdom holds that action is often preceded by strong emotion or, said another way, that heightened arousal jumpstarts behavioural action. Cognitive science lends empirical support to this credo (Custers and Aarts, 2005; Gable and Harmon-Jones, 2010; Hepler and Albarracin, 2014). The crux of this influence lies in the way that basic binary emotional judgments (good vs. bad) become entangled with our motivation to avoid or approach stimuli (Chen and Bargh, 1999). For example, research indicates that fear acts as a powerful motivator for taking active steps to avert a serious medical outcome (e.g. accessing mammography screening; Rimer et al., 2002). However, positive affect can also compel the degree of effort expended when engaged in decision-making (Cunningham, 1988; Fredrickson and Branigan, 2005). Research by Fredrickson and Branigan (2005) found that college students who viewed films that elicited positive emotions such as contentment and amusement subsequently displayed a broader scope of attention and increased thought-action impulses (i.e., the impulse to think about things one would like to do) compared to a neutral control condition. Not only can positive affect motivate increased scrutiny, evidence suggests that it can do so without conscious awareness. Research by Hart and Gable (2013) examined the effects of positive affect and motivational intensity on goal pursuits and found that positive affect, in tandem with high motivational intensity, led to increased goal pursuit by study participants. When asked later, however, participants were unable to report on how the affect induction had influenced their motivation (p. 925). A similar finding was reported by Custers and Aarts (2005).
In the arena of REC review, findings from the aforementioned research suggest that REC reviewers who hold a favourable attitude toward a particular piece of research will be more motivated to engage in a deliberate and nuanced decisional process than those who hold a more negatively weighted attitude.
Affect as common currency
Thoughts, according to Peters et al. (2006), are inherently more complex than feelings which, as noted earlier, often assume a good/bad binary form. The simplicity of this form makes it possible to compare entities whose general qualities belie comparison. In some ways this can be very beneficial, as it serves as a convenient and expeditious means to reduce high cognitive load comparisons into more manageable affective states. In the REC context, using affect as a common currency for deliberation might be evidenced if REC members began engaging in debate characterized by general evaluative affective states, foregoing a more detailed examination of specific and perhaps complex factors that should be inspected. Peters et al.’s (2006) explanation of this tendency is instructive: “By translating more complex thoughts into simpler affective evaluations, decision makers can compare and integrate good and bad feelings rather than attempt to make sense out of a multitude of conflicting logical reasons” (pp. 149–150). This reductive maneuver, of course, comes with an analytic cost, as important details and associated complexity are lost when emotions unwittingly become the focal point for deliberation. If two reviewers were locked in an oversimplified debate, or if an entire REC were painting broad-stroke approval or condemnation of a particular ethics application, this would likely signal that careful reasoning was being overridden by affect as common currency. An example derived from the author’s experience involves the use of minors as participants. If it happened during an ethics application review involving minors that the review discussion was strewn with comments such as “I feel very strongly that. . .” or “It upsets me to no end when. . .”, this could be a sign that specific scrutiny of ethical principles, tensions, and regulations had given way to deliberation based on REC members’ emotional reactions to the subject matter.
Addressing nonrational factors
I have argued that nonrational factors inevitably impact judgment and decision-making among those who partake in research ethics review. While there is no shortage of empirical evidence and scholarly output to support the presence and influence of such factors, the corresponding literature that examines how best to address them is much less impressive. The term “debiasing” came to the fore in the early 1990s to denote the importance of not only identifying myriad biases but of also minimizing their impact. Despite its 30-year history, debiasing research remains underdeveloped and hence far from conclusive. Most of what exists focuses on diagnostic errors among medical students (see Griffith et al., 2020 for an extensive review), with a few additional studies directed toward judicial sentencing (e.g. Lidén et al., 2019; Stein and Drouin, 2018).
In their scoping review of debiasing interventions among medical students, Griffith et al. (2020) identified four primary debiasing strategies: gaining increased medical knowledge or experience (seven studies); guided reflection (eight studies); self-explanation of reasoning (nine studies); and checklists to increase diagnosis considerations (seven studies). Of these four, only guided reflection showed notable success. In a typical guided reflection intervention, medical students are tasked with responding to a structured reflective process intended to help ensure alternative diagnoses have been considered, evidence has been gathered, and so forth. Griffith et al.’s finding lends support to the contention that directing decision-makers to systematically consider alternatives can decrease certain types of diagnostic errors. The function and intent of such corrective interventions among medical students, however, has been called into question by Norman et al. (2017), who aptly point out that experiential knowledge and education play an important role in minimizing error, irrespective of biases that may come into play. These authors further note that the majority of research on debiasing among medical students does not actually target cognitive biases as the source of diagnostic error.
The current research on debiasing indicates a modest degree of empirical support for guided reflection and is largely based on improving the diagnostic prowess of medical students. Unfortunately, this serves as a poor analogue to the types of biases that may occur among REC reviewers when assessing risk/benefit ratios and other parameters of ethical research.
A much better analogue comes from the practice of psychotherapy, namely, cognitive behavioural therapy (CBT). As the most widely known and studied psychotherapeutic approach, CBT has amassed an impressive body of efficacy research across varied populations and clinical conditions. Meta-analytic reviews consistently report significant and enduring improvement among those who participate in this type of therapy (e.g. Hofmann et al., 2012; Twomey et al., 2015).
The central premise of CBT holds that negative mood states (e.g. depression, anger, anxiety) are causally related to distorted cognitive processes which operate automatically and with varying degrees of conscious awareness. Therapy involves helping clients detect and correct their faulty ways of thinking. The therapeutic process begins with a psychoeducational component that involves teaching the CBT model of change along with examples that illustrate the relationship between cognition and emotion. The therapist then uses conversational strategies, such as Socratic dialogue, to help clients identify specific distorted cognitions and their emotional consequences. The discovery phase occurs both in office and out in the “real world.” Once discovered, distorted cognitions are systematically replaced with more functional cognitions. The effectiveness of the CBT therapeutic process rests on the client’s willingness and ability to engage in sustained metacognitive activity. Said another way, therapeutic gains are most profitably realized when clients become proficient observers of their thinking (Wedding and Corsini, 2019).
Many of the cognitive distortions targeted in CBT resemble the cognitive processes noted in bias and heuristic research. For example, a common cognitive distortion that arises in depressed people involves a tendency to overgeneralize a single experience as representative of a general and enduring pattern (e.g. arriving at the conclusion that everyone hates you because one person snubbed you at an office party). This, of course, is similar to the availability heuristic, which involves overestimating the likelihood of an event based on the availability of similar examples. CBT’s process of identifying and addressing cognitive distortions runs parallel to the interventive reflection processes suggested in this paper. Both involve gaining an increased understanding of threats to rational reasoning; using a reflective conversational process to ascertain instances where compromised reasoning undermines the pursuit of desired outcomes; and revising initial cognitions (interpretations, judgments, decisions) to increase the likelihood of reaching desired outcomes.
In keeping with the CBT interventive process noted above, it is insufficient to simply bring increased awareness to the form and function of the various nonrational factors that can influence REC deliberation. Passively learning about biases and heuristics provides no assurance that individuals will avoid them or engage in a timely and accurate correction (Hart and Gable, 2020; Lambe et al., 2016). Rather, consistent with the debiasing research reviewed by Hart and Gable (2020) and the CBT intervention research discussed above, debiasing efforts are more likely to be efficacious if they include an active and ongoing reflective process. One form of active engagement involves upstream structuring interventions that help prevent errors from occurring in the first place. These are sometimes referred to as cognitive forcing strategies because they force (in benevolent ways) individuals to do something that is ostensibly advisable and beneficial. Such strategies are often promoted in medical practice as a way to compel physicians to follow protocols that decrease the likelihood of error (e.g. procedural checklist for certain conditions where misdiagnosis due to cognitive error is common) (Croskerry, 2003). While helpful in improving confidence and accuracy of judgments rendered in some medical contexts (Lambe et al., 2016), the target of cognitive forcing strategies remains concealed, thus doing little to mitigate the presence of nonrational factors across other contexts. The recipient of such forcing strategies may lack understanding of why they have been implemented and thus remain oblivious to their import. 
Various other approaches and reforms have been suggested and attempted to decrease decisional variability within and across RECs, including centralized review, increased education, increased collaboration between researchers and RECs, accreditation, the use of decision-making protocols, and changes in federal regulation (Candilis et al., 2006; Lynch et al., 2019; Resnik, 2017). While undoubtedly helpful to varying degrees, these strategies look past the variability that inevitably arises due to psychological factors. As remedy to this omission, the approach promoted in this paper draws attention to the reasoning processes in operation during REC deliberation.
The implicit irony of attempting to redress biases at a personal level is that yet another bias exists that confounds the corrective agenda; namely, that we are better at detecting biases in others than in ourselves (Pronin et al., 2002). This tendency, known as the blind-spot bias, underscores the need to address nonrational thought processes at the individual level. Success, in this regard, involves adopting a metacognitive approach characterized by a commitment to thinking critically (in the inquisitive sense of the word) about our thinking. A starting point for this undertaking is understanding human cognition as comprised of two systems (Kahneman, 2013): System 1 is characterized as quick, involuntary, and effortless, and System 2 as deliberate, effortful, and orderly. System 1 is considered intuitive and susceptible to affective influence, whereas System 2 is considered agentive and logical. Both systems remain active during waking hours, with System 1 processing and responding to experience unnoticed, sending System 2 information that most often is accepted without effortful scrutiny. It follows that most of the time we are guided by involuntary, unexamined intuitions. System 2 only becomes activated when System 1 is overtaxed, or when prompted. The two systems work in concert and in most instances serve as a reliable and efficient guide for human behaviour. It would be a mistake to assume that the intuitive System 1 is inferior to the logical System 2. Even if possible, it would be cognitively exhausting to engage System 2 processing on a continuous basis. System 1 is indispensable in helping us make sense of our moment-by-moment unfolding experience, and it goes about this work for the most part with pragmatic accuracy. Of concern, however, is the huge corpus of research that shows that despite being unquestionably helpful, System 1 is also susceptible to systematic and predictable errors.
It is System 2’s commission to detect and correct these errors; yet if System 2 does not understand System 1’s operating system and tendencies, then the detect-and-correct function is impaired.
Yet as has been shown through the biases and heuristics discussed in this paper, System 1 tendencies can intrude upon the System 2 processes involved in REC judgments and decisions. Proactive measures directed toward fortifying System 2 processes are thus needed to help inoculate REC members against nonrational threats. The approach offered here involves (1) exposing biases and heuristics that can influence judgment and decision-making (the focus of the first part of this paper), and (2) enabling a process that can mitigate this influence across a broad range of contexts. To achieve the first of these, it is recommended that when orienting new members to the committee, RECs provide an overview of biases and heuristics, where they likely arise, and how they influence judgment. This would be coupled with a System 2 enabling process described as follows.
The method that I suggest involves using interventive questions to prime the proficiencies of System 2 as a means to minimize unduly biased judgment. This approach is structurally and functionally similar to the CBT intervention model which, as noted, has strong empirical support. It also resembles guided reflection, which is the only debiasing approach to date that has demonstrated reasonable efficacy (Hart and Gable, 2020; Lambe et al., 2016). Four types of questions, derived by the author and based on the aforementioned research, comprise the basis of the proposed approach: initiating questions, substantiating questions, reconceiving questions, and verifying questions.
Initiating questions are used at the outset to orient REC members to instances of ethically relevant content that are susceptible to System 1 derived influences. Examples of initiating questions include:
• Given how this ethically relevant information is presented, what type(s) of bias or heuristic will likely influence our judgment?
• Given the content of this ethically relevant information, what type(s) of bias or heuristic will likely influence our judgment?
• Given the emotional valence associated with this ethically relevant information, what sort of affective heuristic will likely influence our judgment?
Substantiating questions are used to help detect the presence of possible System 1 influences. Examples of substantiating questions include:
• What specific cues might indicate that our judgment has been influenced by a bias or heuristic and in what direction (e.g. over/underestimation)?
• What might other people, outside of the REC membership, notice that suggests the presence of a biased reasoning process?
Reconceiving questions are designed to stimulate heightened cognitive effort directed toward generating alternative understandings and explanations. Examples of reconceiving questions include:
• How might another person (e.g. a research participant, a loved one of a research participant, a researcher from a different field, etc.) view our ethical judgment?
• How might verifiable data (statistical probabilities, base rates, etc.) either support or negate our ethical judgment?
• What are three other ways that this ethically relevant information could be viewed?
• If I had little or no emotional investment in this ethically relevant information, how might this influence my judgment?
• To what degree is my decision influenced by what I feel about it, versus what I think about it?
Finally, verifying questions involve testing new explanations and understandings against those that were initially generated and discussed. Examples of verifying questions include:
• What evidence generated from the reconceiving questions suggests my initial intuitions ought to be replaced?
• How would replacing my initial intuitions with this alternative way of understanding/explaining things influence my subsequent judgement?
The outcome of this questioning process is best not viewed as certainty’s resting point. Rather, constructive doubt becomes the remnant artifact of an effective interrogative process. Doubt allows REC members to untether from System 1’s intuitive content, freeing up the opportunity for new understandings, which can then undergo verification. Verifications should be tendered provisionally, with acknowledgment that biases may still go undetected and that decisional accuracy is always approximate.
If desired, the interventive questions offered here can be used by RECs at the beginning of a meeting as a means to target and address foreseeable biases and heuristics prior to detailed discussion of a research ethics application. The questions lend themselves to a sequential order; however, there is no expectation or prescription that they be used in this manner. Indeed, an interventive process like the one suggested will lose its effectiveness if viewed by participants as formulaic and contrived. It is more important that the questions be delivered flexibly and intentionally, altering their order, verb conjugation, and wording to fit a given REC’s needs and characteristics. Alternatively, a REC could choose to draw on the interventive questions strategically when the situation warrants. In this instance, the REC chair would likely assume responsibility for initiating the questioning process, the effectiveness of which would be contingent on identifying optimal moments to intervene. This presupposes the need for a chair who is well versed in the contents of this paper and skillful in managing group process. Optimally, use of the proposed interventive questioning becomes normative within the REC conversational process, where all members collectively work to identify and address biases and heuristics that potentially influence their deliberations. In many instances it will be unnecessary to pose all questions. Simply asking one well-timed, incisive question that triggers a more rigorous System 2 response may be all that is required.
To conclude, it should not be assumed that nonrational factors borne of System 1 processes are necessarily the antithesis of sound rational decision-making. Every decisional process will de facto be incomplete, as there is never solid assurance that no additional relevant factors remain to be found and considered; similarly, every decisional process will be to some degree tainted by nonrational factors. Identifying their existence, understanding their operative means, and reducing their effects is a more realistic and profitable exercise than striving for terminal exorcism. The title of this paper reflects this commitment: this is an exercise in addressing, not eliminating, nonrational factors. It is hoped that the information and ideas shared in this account will help fulfill this aim.
Declaration of conflicting interests
The authors declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
All articles in Research Ethics are published as open access. There are no submission charges and no Article Processing Charges as these are fully funded by institutions through Knowledge Unlatched, resulting in no direct charge to authors.
