Abstract
Bartels (2023; this issue) argues that (a) classic studies and topics covered in psychological textbooks and introductory classes are often misrepresented, (b) there is an ideological bias among scholars in psychology toward the left side of the political spectrum, and (c) this bias is responsible for the misrepresentation of studies and topics in textbooks. In our commentary, we argue that claims (a) and (b) may be correct, but that they have nothing to do with each other. Thus, claim (c), that a liberal bias among scholars and course instructors leads to "indoctrination" in introductory courses and textbooks, is unsubstantiated and, indeed, detrimental.
Bartels (2023; this issue) argues that (a) classic studies and topics covered in psychological textbooks and introductory classes (such as the Stanford Prison Experiment, research on stereotype threat, or the Implicit Association Test) are often misrepresented, (b) there is an ideological bias among scholars in psychology toward the left side of the political spectrum ("liberal bias"), and (c) this bias is responsible for the misrepresentation of studies and topics in textbooks.
There is ample evidence for claim (a): Yes, many classic studies are indeed "misrepresented" in textbooks, in the sense that they are oversimplified. In a typical introductory textbook, a specific study is described with regard to its design, its measures, its sample, and the most relevant results. Further methodological issues (e.g., low reliability, small samples and low statistical power, limited generalizability), contradicting evidence, failed replication attempts, and scholarly debates about the conceptual interpretation of the findings are typically not mentioned. One might argue that, especially given the current debate about the replicability and generalizability of psychological findings, omitting all these details "misrepresents" the original research. Yet simplification can sometimes be very useful.
Of course, a study in a textbook should always be accurately presented, properly contextualized, and soundly interpreted. Also, if a particular effect turns out to be non-replicable, or worse, invalid, the study should not make it into the next edition of the textbook, or should at least be critically discussed there. But, apart from these more extreme cases, oversimplifying a study by omitting certain details is not necessarily damaging. Social constructivism (Vygotsky, 1978) as well as research on instructional scaffolding (Hogan & Pressley, 1997), for instance, would suggest that giving a full account of the entire scholarly discourse surrounding a particular study may overburden students as long as they do not yet possess a solid, broad knowledge base.
That said, Bartels (2023) certainly has a point when he criticizes that some particular studies have a long history of being grossly and consistently oversimplified in textbooks. The Stanford Prison Experiment is indeed a good (if also well-known) example (Bartels, 2015). But with the information that is now out there (Haslam et al., 2019; Le Texier, 2019), this will certainly, if slowly, change. Moreover, initiatives like the Framework for Open and Replicable Research Training (FORRT; Azevedo et al., 2022; see https://forrt.org/reversals) or the social psychology division of the German Psychological Society are compiling lists of the replicability status of classic (social) psychological effects that can be used to appropriately present the current state of research in teaching.
There is also ample evidence for claim (b): Yes, scholars in psychology (but also in other disciplines) tend to be more liberal and "leftist" in their ideological convictions than the general public, and some are worried about the detrimental effects of such a "liberal bias" on scientific rigor and quality (see, e.g., Duarte et al., 2015). But, again, what exactly is the problem here, and how bad is it? Yes, more conservative researchers may feel alienated and rejected among a mainstream of leftists in academia (e.g., Everett, 2015). But where is the evidence for the claim that a lack of "viewpoint diversity" actually decreases scientific rigor and quality? In their commentary on Duarte et al. (2015), Gelman and Gross (2015) write: "We have seen no good evidence that social science fields with more politically diverse workforces have higher evidentiary standards, are better able to avoid replication failures, or generally produce better research" (p. 26). Brandt and Proulx (2015) put it even more succinctly: "it is not at all clear that more or less liberal research agendas produce different levels of false positives" (p. 20). In fact, a recent adversarial collaboration by Reinero and colleagues (2020) demonstrates that the political slant of psychological research papers is not related to the replicability of their findings. In sum, there is no evidence for the claim that a lack of "viewpoint diversity" makes science worse.
There is also no solid empirical evidence for claim (c), that liberal bias is causally responsible for the misrepresentation of studies and topics in textbooks and courses. Bartels (2023) merely illustrates the possibility of such a causal relationship with some anecdotal examples. These examples are far from solid evidence for the claimed negative effects of liberal bias; moreover, it is inconclusive whether they indicate a liberal bias at all. Let us take a look at the textbook coverage of the Implicit Association Test (IAT). Bartels and Schoenrade (2022) found that "Only two textbooks (12%) mentioned automatic white preferences among African Americans taking the Race IAT" (p. 118). Why is this evidence for a "liberal bias"? Omitting the "preference for whites" effect in a textbook could just as well signify a "conservative bias", given that conservatives might not like the idea that a pro-White bias among African Americans could simply reflect an artifact. Also, it is interesting to note that Bartels and Schoenrade (2022) insist that "the IAT… does not appear to be a strong predictor of behavior" (p. 113). Besides the fact that "the" IAT does not exist (there are many variants of the IAT), Bartels and Schoenrade (2022) could be blamed for being selective in their coverage of the literature and in their interpretation of meta-analytic results here. Meta-analyses they chose not to cite in their paper paint a more positive picture of the convergent validity (e.g., Hofmann et al., 2005) and the predictive validity of the IAT (e.g., Kurdi et al., 2019).
To claim causality, one also has to convincingly rule out plausible alternative explanations. In that vein, Bartels (2023) concedes that some findings might be presented in an uncritical fashion merely to "demonstrate the relevance and applicability of social psychology" and due to their "utility in selling psychology." Yet, he claims that these explanations do not apply to his examples. Coming back to the IAT example, this is a surprising conclusion, given the prominence of the IAT in demonstrating the "shooter bias" (Correll et al., 2014; Mekawi & Bresin, 2015) and its societal relevance in light of the discussion of police violence against people of color. Further, there are many arguably more plausible explanations besides lecturers' ideology for including the IAT in a syllabus. One is that the IAT provides a great example of the methodological breadth of social psychological measures beyond self-reports.
If Bartels (2023) were right and IAT research were a good example of the existence of a liberal bias in social science research, then we should witness a systematic suppression of empirical evidence challenging the IAT for its psychometric properties. This is clearly not the case: Papers in which the IAT is discussed in a most critical fashion have been and continue to be widely cited. Fiedler et al.'s (2006) very critical paper has been cited 429 times, and Blanton and Jaccard's (2006) paper has even been cited 877 times, according to Google Scholar. The scientific community, at least, appears to welcome criticism of the IAT, as scholars ought to do. The IAT is certainly not a good example of a suppressing force of a left-leaning research community. Neither is there evidence for a systematic downplaying of research on prototypically "conservative" topics. Jonathan Haidt's claim that conservatives harbor a richer set of "moral foundations" than liberals do (e.g., Graham et al., 2009; Haidt & Graham, 2007) has certainly not been "silenced" or "canceled", but is still widely received and intensely discussed (e.g., Kugler et al., 2014). Lee Jussim's book "Social Perception and Social Reality" (Jussim, 2012), in which he argues that stereotypes are not biases, but rather accurate reflections of reality most of the time, even received an award from the American Association of Publishers as the best book in psychology in 2012. How is that reconcilable with the idea that liberal bias oppresses divergent views?
Coming back to Bartels' (2023) idea of "indoctrination" in introductory textbooks, we arrive at a different conclusion. We do agree that many textbooks (and much teaching) in psychology could do better at correcting scientific errors and at contextualizing unreliable and implausible, yet "classic", research. The field's discussions about the robustness of studies (Yarkoni, 2022) should definitely resonate in up-to-date textbooks. That said, the claim that these insufficiencies are caused by ideological indoctrination is unwarranted, unsupported by evidence, and potentially distracting from a serious, targeted scientific debate on teaching and textbook quality. A much more plausible reason for the widespread oversimplification of studies and topics is that it is didactically more appropriate to first reduce the complexity of a topic (i.e., in an introductory course), even at the cost of oversimplifying it, and then to discuss the details in a second step, after students have acquired a solid knowledge base.
We are worried that leveling accusations of "indoctrination," especially when the evidence is so arbitrary, risks further exposing our field to politicization; at least in the United States, (right-wing) politicians have notoriously used similar rhetoric to misconstrue scientific evidence on socially contentious issues such as climate change, evolution, and various public health issues (Thorp, 2023).
Author Note
The authors have no conflict of interest to disclose. This work has been funded by the German Research Foundation (Deutsche Forschungsgemeinschaft DFG) in the context of the Priority Program “META-REP” (GO 1674/10-1), project no. 467852570.
Funding
The authors disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: This work was supported by the Deutsche Forschungsgemeinschaft (DFG) in the context of the Priority Program "META-REP" (grant no. GO 1674/10-1, project no. 467852570).
