Abstract
Psychological science has long maintained a preference for quantitative methods over qualitative methods. The allegiance to one methodological family and the rejection of another means that, at least in part, the field’s methods are constraining the universe of research questions it is willing to ask. In this article, we provide an overview of mixed-methods research, which involves the use and integration of both qualitative and quantitative methods, and why psychology should do more of it. The focal audience is quantitatively oriented researchers who are interested in—and perhaps even skeptical of—the role of qualitative methods for their work. The article consists of three general sections: (a) a brief discussion of philosophical issues underlying the application of mixed-methods research in psychology, (b) a deeper examination of what constitutes “quantitative” and “qualitative” research, and (c) a description of four major mixed-methods-research designs that hold promise for psychology research. We provide researchers with broad conceptual foundations and concrete tools for how research questions in psychology can be mapped to different mixed-methods designs, helping correct for researchers’ lack of exposure and/or negative preconceptions that have inhibited uptake in the field.
Psychological science has long maintained a preference for quantitative methods over qualitative methods. This preference has been explicit—some journals have expressly indicated that qualitative methods are not welcome (Gergen et al., 2015; Levesque, 2021; Yoshikawa et al., 2008). This preference has also been implicit, revealed through graduate training programs that routinely require courses in quantitative methods yet rarely provide any opportunities at all for training in qualitative methods (Rubin et al., 2018).
There are many problems with this dynamic. Most centrally, the allegiance to one methodological family and the rejection of another means that, at least in part, the field’s methods are constraining the universe of research questions researchers are willing to ask and therefore the research that is represented in the field’s journals (Power et al., 2018). Moreover, questions that focus on the what, how, and why of psychological phenomena—questions that are of core interest to many psychologists—are particularly well suited to qualitative methods. Rather than starting the research process with a clearly articulated research question and then using the optimal method to address it, research questions are being tailored to fit the dominant quantitative methodological approach. In turn, research is assessed relative to quantitative benchmarks rather than the extent to which the research question has been optimally addressed. These features of the scientific ecosystem perpetuate the dominance of quantitative methods and marginalization of qualitative methods.
It does not have to be this way. Mixed-methods research, which involves the use and integration of both qualitative and quantitative methods, has been promoted as a pragmatic, “question-driven” approach to conducting research (Gorard, 2010). Mixed-methods research holds great promise for psychology, but because of lack of exposure or training, asymmetrical publishing norms, and/or negative preconceptions, many researchers are unfamiliar with what mixed-methods research is, why it is valuable, or how to do it.
The purpose of the current article is to provide an overview of mixed-methods research in psychology. The structure of the article consists of three major sections: (a) a brief discussion of philosophical issues underlying the application of mixed-methods research in psychology, (b) a deeper examination of what constitutes “quantitative” and “qualitative” research, and (c) a description of four major mixed-methods-research designs that hold promise for psychology research.
We aim to build on past treatments of mixed methods in psychology that have, in our view, been limited in two primary ways. First, discussions of mixed methods tend to provide a somewhat muddled philosophical and conceptual rationale, which insufficiently communicates the broad applicability of mixed methods across psychology. Second, previous works have grounded their discussions of designs in motivations to achieve particular goals in specific research contexts, such as social psychology (Power et al., 2018), developmental psychology (Yoshikawa et al., 2008), environmental psychology (Lewis et al., 2020), cross-cultural psychology (Schrauf, 2018), educational policy (Cooper, 2011), close relationships (Braithwaite et al., 2014), and health psychology (Bishop, 2015). Although these efforts are certainly useful, our goal is to provide a broader perspective on mixed methods that is applicable across the field. Accordingly, we seek to provide researchers with broad conceptual foundations and concrete tools for how research questions in psychology can be mapped to different mixed-methods designs.
We bring our perspective as researchers who do not prioritize one method over the other and who have experience with various forms of quantitative, qualitative, and mixed-methods research. Nevertheless, our primary audience for this article is mainstream, quantitative researchers in psychology. Some who fit that description may be quite hostile to the idea of incorporating qualitative and mixed methods into their research (for a discussion, see Lieber & Weisner, 2010). Others may be interested but are unsure of how to do so. Still others may already use qualitative and mixed methods in their research but do not have a strong background or framework that supports it. This article is for all of the preceding. Researchers who are firmly grounded in qualitative and mixed-methods research may have little interest in what we have to say and may even disagree with some of our arguments (for some potential objections and counterarguments, see supplemental document at https://osf.io/y54gc; see also Yanchar & Williams, 2006). Our goal here is to provide a general treatment that is grounded specifically in psychology and that uses the language of mainstream quantitative psychologists, with the hope that we can begin to move the needle toward methodological diversity in the field.
Philosophical Issues Underlying Mixed-Methods Research
Engaging seriously with mixed-methods research requires confronting the philosophies of science that underpin researchers’ understanding of different methodologies. We discuss some of these core issues here, but our treatment is necessarily brief and superficial. We provide sketches of the central issues to situate our arguments; for additional background, we highly encourage readers to consult the works cited and the recommended readings listed at the end of the article.
The very notion of “mixed methods” as a named methodological orientation came to prominence via the so-called “paradigm wars” of the 1970s and 1980s, in which there were heated debates about the relative merits of quantitative and qualitative methods across various disciplines in the social sciences (Alise & Teddlie, 2010; Bryman, 2008; Oakley, 1999). This debate was not merely about methodology; rather, it was fundamentally an argument about the philosophical paradigms viewed to underpin the methods, that is, postpositivism and constructivism (Alise & Teddlie, 2010; Ponterotto, 2010). In brief, postpositivism is a philosophical approach that views truth and reality as relatively stable and singular and holds that researchers have the ability, albeit limited, to access this truth via methods of observation and experimentation. In contrast, constructivism holds that there is no singular truth in the world and that reality is coconstructed through human activity and grounded in social and cultural contexts and practices. Mixed methods was proposed as a solution to the debate about the relative merits of these two paradigms, rejecting the apparent need to align with postpositivism/quantitative or constructivism/qualitative in favor of adopting pragmatism as the guiding paradigm (for more details, see Gillespie et al., 2024). That said, the word “pragmatism” is not used consistently by individuals who advocate for mixed methods (Johnson et al., 2007; Johnson & Gray, 2010). Some use this term in reference to a coherent philosophical paradigm rooted in the ideas of Peirce, James, and Dewey—one that values truth, experience, and interaction; prioritizes contributions to the social good; and challenges the notion that realist assumptions must be tied to objectivist truth (Biesta, 2010).
Others describe mixed methods as pragmatic because of an implied flexible “do what works” approach that can involve intermingling of both methods and paradigmatic views (for discussions, see Bishop, 2015; Ghiara, 2020; Gillespie et al., 2024; Hathcoat & Meixner, 2017).
The inconsistency in how pragmatism is used leads to an unclear philosophical foundation for mixed-methods research. Adopting one or the other of these definitions would not likely help matters. Pragmatism as a philosophical paradigm is limited in that it provides a worldview that may be inconsistent with the goals of some researchers (as is true for all paradigms), allowing mixed methods to be rejected solely on philosophical grounds rather than the potential applicability of the methods. The do-what-works approach is limited both in that it fails to confront the challenges of integrating different paradigms (e.g., postpositivism and constructivism) and in that it perpetuates the notion that qualitative and mixed methods are freewheeling enterprises rather than families of clearly articulated, systematic frameworks for design and analysis (Biesta, 2010). Unfortunately, because foundations in philosophy of science are not part of standard psychology graduate training or seen as especially relevant to the daily work of research psychologists, there are damagingly incoherent views related to paradigms and methods that underlie understanding of mixed methods. This tends to be the case for both the advocates and the skeptics.
In particular, a major barrier to recognizing the value of mixed-methods research is the conflation of what constitutes paradigm and what constitutes method (Gorard, 2010; Madill, 2015; Syed & McLean, 2022). A paradigm represents the overarching framework of beliefs and assumptions that guide research, such as postpositivism, constructivism, and criticalism. These paradigms consist of specific assumptions about ontology, epistemology, axiology, and methodology, all of which guide researchers’ practice (Ponterotto, 2005). Although paradigms include beliefs about the optimal methods for achieving research goals, the methods themselves—quantitative, qualitative, and mixed methods—are not paradigms. Moreover—and critical to the point of this article—all methods can be used in a way that is consistent with each paradigm (Westerman & Yanchar, 2011). Whereas quantitative methods are more prevalent and often more suitable in the postpositivistic paradigm, qualitative and mixed methods can also be used (Bishop, 2015). For example, Svensson and Syed (2019) conducted a comparative qualitative analysis of youths in the United States and Sweden to understand how identities are developed and negotiated in different immigration contexts, relying on standardized coding systems for the narrative data that were applied in the same way to both samples. Likewise, although qualitative and mixed methods may be more suitable for constructivist and criticalist paradigms, quantitative methods can also be used (e.g., QuantCrit; Garcia et al., 2018). For example, Suzuki et al. (2021) provided examples for how mixture modeling—a statistical technique for identifying latent subgroups—can be integrated with critical theory by engaging in reflection when generating research questions, planning analyses, and interpreting findings. 
We contend that the conflation of “paradigm” and “method” is one reason for the slow uptake of qualitative and mixed methods in psychology (for details, see Rogers et al., 2024).
We see the unfortunate conflation of “paradigm” and “method” in both the methodological literature on mixed methods (e.g., Creswell & Plano Clark, 2017) and among individuals in psychology advocating for qualitative and mixed methods (e.g., Landrum & Garza, 2015; Masaryk & Stainton Rogers, 2024; Power et al., 2018). Even Bishop (2015), who recognized the distinction between paradigm and method, largely endorsed the alignment as appropriate because of, in part, the complexity in sorting out methodological and paradigmatic entanglements. Moreover, in an otherwise excellent article on mixed-methods research in developmental psychology, Yoshikawa et al. (2008) scarcely discussed philosophical or paradigmatic issues at all.
This conflation has rather substantial implications for research practice in psychology. The dominance of the postpositivistic paradigm in the field leads to a skepticism about the value of constructivist and critical approaches, treating them as though they do not constitute “good science” (Lyons, 2009; Rogers et al., 2024). The alignment of constructivism/criticalism with qualitative methods means that qualitative methods can be dismissed on philosophical grounds—that they are outside the realm of acceptable inquiry—rather than evaluated on their methodological merits. Qualitative methods, when seen as a paradigm, are too subjective and antithetical to folk notions of what science ought to be (Lyons, 2009). The concepts of reflexivity and subjectivity are seen as a threat to “objective” quantitative research (Reischer & Cowan, 2020) despite the fact that ostensibly objective quantitative research involves a mountain of unrecognized subjectivities (Jamieson et al., 2023) and arbitrary decision-making (Rosnow & Rosenthal, 1989). Moreover, the alignment reifies existing methodological hierarchies in the field. Experiments are the coin of the realm for psychologists, but there is nothing inherently quantitative about an experiment and nothing that necessitates the use of quantitative assessments to evaluate their outcomes (Power et al., 2018). Nevertheless, the two methodological approaches, which themselves are broad categories that subsume various methods, are seen as being from different paradigms and having different goals. This is not right.
These are heady issues that can be discussed, at length, on their own (for further treatments, see Gorard, 2010; Madill, 2015; Rogers et al., 2024; Syed & McLean, 2022). Our rationale for covering this philosophical ground is to raise awareness of how the conflation of “paradigm” and “method” closes psychologists off from exploring all available methods. This has been one of the major limitations of the past treatment of mixed-methods research in psychology (e.g., Bishop, 2015; Hanson et al., 2005; Power et al., 2018; Yoshikawa et al., 2008). We invite readers to reject this inappropriate conflation and open their mind to how mixed methods might be applicable to their own research domains.
Because this article is intended for a mainstream audience in psychology, which is both heavily quantitative and postpositivistic, we largely examine what mixed-methods research might look like in that context. The framing of what follows may look quite different if one assumes a constructivistic paradigm, and individuals who are working from that perspective may take issue with some of our arguments. A longer discussion of this issue is available in the online supplement at https://osf.io/y54gc. Although we believe that the ideas and arguments in the subsequent sections hold relevance for all researchers in psychology, regardless of their paradigmatic commitments, a significant gap remains: There are still too few resources on how postpositivistic psychology can better integrate mixed methods. Addressing this gap is precisely what motivated us to write this article.
Thinking Differently About What Constitutes “Quantitative” and “Qualitative”
The fuzziness of definitions
An observant reader will note that we have not provided a clear definition of what constitutes quantitative, qualitative, and mixed methods. This is because doing so is no simple matter and requires detailed discussion (see Johnson et al., 2007). Indeed, the simple definitions of terms are that “quantitative” pertains to numbers, “qualitative” pertains to text, and “mixed methods” pertains to the integration of both, but such definitions belie the complexity that is necessary to truly understand the concepts (Allwood, 2012). In many ways, we view this section of the article as the most important and useful for researchers, teachers, and consumers of psychology because it helps bring clarity to the methodological landscape that they inhabit.
The first key point is that like nearly everything people treat as binary, the distinction between quantitative and qualitative is not one of discrete kinds but, rather, one of degree. A simple example illustrates this point. Say researchers gather a group of participants in a room and present to them an ostensibly funny photo, such as that presented in Figure 1. The researchers believe that this photo has some humor value but are interested in assessing how humorous the participants find it to be. They have a variety of approaches they could use to conduct this assessment, ranging along a continuum of quantitative to qualitative.

Illustration of how assessments of how funny a picture is can be purely quantitative (e.g., reaction time), purely qualitative (e.g., semistructured interview), or something in between (e.g., Likert-type scale). The image is of limecat, which has no known source. We often refer to her as Maria.
At the most extreme quantitative end, the researchers could assess something such as how quickly the participants laughed at the photo. A quick reaction time could indicate that the participants genuinely found the picture to be funny, whereas a delayed one could indicate that they felt a social obligation to laugh because of the context even if they did not think it was particularly funny. Alternatively, researchers could measure the decibel level of the laughter. Both of these examples are on the extreme quantitative end because they can be measured as numerical representations of underlying quantities—time and volume.
At the most extreme qualitative end, the researchers could conduct semistructured interviews or focus groups with the participants, asking them whether they found the picture to be funny, why it is funny, why it might not be funny, and so on. Doing this would generate text data that would pertain to the participants’ subjective impressions of the quality of the photo, with no inherently quantitative properties.
There is, however, space between these two extremes, space that is occupied by the approach taken by very many psychologists, particularly researchers interested in individual differences: questions that rely on Likert-type response options. The researchers could have an item such as, “How funny is this picture?” and include response options ranging from 1 = not funny in the slightest to 6 = the funniest thing I have ever seen. They could have several similar questions that they ask and then create a “humor scale” that is the average score on the items, finding that the picture has a humor score of M = 3.46, SD = 0.85. The researchers treat this approach as “quantitative,” but really, it is a blend of quantitative and qualitative (Axinn & Pearce, 2006; Landrum & Garza, 2015). They are prompting participants for their subjective response to the picture, fixing their possible qualitative responses to the prompt, and then assigning numbers to those responses. The numbers that are assigned are arbitrary and do not correspond to any meaningful quantity (Kazdin, 2006), and thus, there is nothing inherently quantitative about this approach. Because the researchers assigned values to the responses and then analyzed them statistically, they treat the method as quantitative, but it is essentially a structured interview with fixed response options. Indeed, once upon a time, such an approach was considered to be a qualitative method (Brower, 1949; Michell, 2011).
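The arbitrariness of the assigned values can be made concrete with a minimal sketch. All of the data below are hypothetical, and the 1 to 6 labels follow the humor-item example above:

```python
from statistics import mean, stdev

# Hypothetical Likert responses: each inner list is one participant's
# answers to four "humor" items, scored 1 (not funny in the slightest)
# to 6 (the funniest thing I have ever seen).
responses = [
    [3, 4, 3, 2],
    [5, 4, 4, 5],
    [2, 3, 3, 3],
    [4, 4, 5, 3],
]

# Scale score = mean of a participant's item responses.
scale_scores = [mean(items) for items in responses]

m = mean(scale_scores)
sd = stdev(scale_scores)
print(f"M = {m:.2f}, SD = {sd:.2f}")

# The values 1-6 are arbitrary labels for ordered qualitative
# categories: adding a constant to every response would shift M but
# leave the ordering of participants untouched.
```

Nothing in the computation distinguishes these numbers from measured quantities such as milliseconds or decibels; the quantitative character is supplied entirely by the researcher's decision to assign and average numeric labels for fixed qualitative response categories.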
Through this example, we show that the distinctions made between quantitative and qualitative methods are not as clear-cut as many psychologists may be inclined to think. Rather, there is a need to distinguish between the types of data that researchers work with and the methods that they use to analyze them, an issue that we explore in further detail in the next section. This distinction is all the more necessary in light of the strong divisions and value claims that researchers often ascribe to methods when treating them as paradigms.
The independence of data and analysis
In both casual discussions and methodological treatments, references to “quantitative methods” and “qualitative methods” abound. Such references obscure critical distinctions between quantitative and qualitative data and quantitative and qualitative analysis (Axinn & Pearce, 2006; Levitt et al., 2018; Yoshikawa et al., 2008). Here, we make the case for the distinction between data and analysis and why the distinction matters. Among other reasons, recognizing the distinction helps show how many researchers are already engaging in some forms of mixed-methods research, perhaps without realizing it.
If the nature of the data and the nature of the analysis are treated as independent of one another, we can create a visual field that produces four quadrants (Fig. 2). For simplicity’s sake, we betray the previous section and treat quantitative and qualitative as discrete kinds, but the arguments advanced here easily accommodate our continuous conceptualizations.

A schematic showing data and analysis as independent dimensions, each varying in its quantitative and qualitative properties and creating four quadrants of combinations of types of data with types of analysis.
Two of these quadrants in Figure 2 will be easy for psychological researchers to understand and recognize. When researchers have quantitative data and conduct quantitative analysis on those data, this is generally what is referred to and thought of as “quantitative research.” When researchers have qualitative data and conduct qualitative analysis on the data, this is generally what is referred to and thought of as “qualitative research.” All very simple and straightforward. But what about the other two quadrants, where the data and analysis methods are discordant?
We take the easy one first, in which there are qualitative data but they are analyzed quantitatively, which is an approach that is widely used throughout the field (Fakis et al., 2014). Researchers commonly gather qualitative data, be it text, video, web content, or something else. Then, they code those data for some features of interest, an approach that “quantitizes” the qualitative data (Sandelowski et al., 2009), and subsequently enter those quantitized data into statistical models. Examples of this approach are easy to conjure quickly. In developmental psychology, central constructs such as attachment and identity have relied on interview-based assessments that were used to generate data that could be coded and analyzed statistically (Kroger & Marcia, 2011; Main & Goldwyn, 1984). In personality psychology, the vast majority of research on narrative identity gathers stories from individuals, codes them for a variety of features, and then analyzes them statistically (Adler et al., 2017). Quantitizing qualitative data is a common and acceptable approach throughout the field, highlighting how qualitative data are not seen as a problem as long as they are analyzed quantitatively.
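A minimal sketch of quantitizing, with entirely hypothetical narratives and codes, might look like the following. In practice, trained coders would apply a codebook with established reliability; here the codes are simply stipulated for illustration:

```python
# Hypothetical interview excerpts (qualitative data).
narratives = {
    "p01": "I decided to move and made it happen on my own.",
    "p02": "Things just happened to me; I had no say.",
    "p03": "I chose my path and worked hard for it.",
    "p04": "My family made every decision for me.",
}

# Presence/absence codes for an "agency" theme, as assigned by
# (hypothetical) human coders: 1 = agency present, 0 = absent.
agency_codes = {"p01": 1, "p02": 0, "p03": 1, "p04": 0}

# Once quantitized, the codes can enter any statistical summary or
# model alongside other quantitative variables.
n = len(agency_codes)
prevalence = sum(agency_codes.values()) / n
print(f"Agency theme present in {prevalence:.0%} of narratives")
```

The coding step is where the mixing happens: the data remain qualitative, but the features of interest are converted into numbers that downstream statistical analyses treat no differently than any other variable.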
Now the fourth quadrant looms, likely creating some degree of confusion or uncertainty. This quadrant pertains to quantitative data that are analyzed qualitatively. To some, this might seem absurd or even impossible. Yet it, too, is quite common in the field but seldom recognized. Qualitative analysis involves interpretation, and that interpretation can be applied to any kind of data. In other words, any time researchers take a set of data and make some kind of meaning of it, they are conducting qualitative analysis. A widespread application of this approach is in the context of factor analysis, particularly exploratory factor analysis, in which the factors are not predetermined. Once the analyst is reasonably confident in the number of factors and the items that load on each, the next step is to interpret the factors and label them. This is, in fact, qualitative analysis of quantitative data; it involves taking a set of observations and then synthesizing and interpreting them. In other words, it involves identifying the underlying theme that covers the observations, much like one might do when performing a thematic analysis (Braun & Clarke, 2006). Many readers of this article have conducted such factor analysis, and so we pose a question: What formal procedure have you used to interpret and label the factors?
The answer for everyone will be the same: They used none. Rather, the standard procedure is to squint at the items, wave one’s hands, and generate a label based on their preconceived notions and/or personal desires for what the factors ought to be. This is qualitative analysis, and it is qualitative analysis done poorly. The implications of this practice are substantial because once a factor receives a label, one tends to focus more on that label versus the underlying items that define it, and the label thus serves to reify the construct and make it “real” (Hathcoat & Meixner, 2017; for a related discussion, see Fried, 2017). The same general issue arises with groups created via cluster analysis, latent class analysis, structural topic modeling, and the like, all of which involve major interpretations as the analyst moves from solution to labeling and represent a form of “qualitizing” quantitative data (Landrum & Garza, 2015). As an entirely different example, what is the discussion section of empirical journal articles other than qualitative analysis of quantitative data? Discussion sections involve synthesizing and making meaning of patterns of data; they are the interpretations of what researchers believe they found, what it means, and possibly how it could be used in policy and practice. This is a form of everyday qualitative analysis hiding in plain sight.
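The interpretive step in factor labeling can be made explicit in a small sketch. The items and loadings below are hypothetical; the point is that the numbers determine which items travel together, but nothing in them determines the label:

```python
# Hypothetical rotated loadings for four items on two factors.
loadings = {
    "I enjoy parties":            (0.72, 0.10),
    "I talk to strangers easily": (0.65, 0.05),
    "I worry about the future":   (0.08, 0.70),
    "I am easily startled":       (0.12, 0.61),
}

def top_items(factor_idx, k=2):
    """Return the k items loading most strongly on a factor."""
    return sorted(loadings, key=lambda it: -loadings[it][factor_idx])[:k]

# Labeling is an analyst's interpretive (qualitative) act: another
# analyst could defensibly choose a different theme for the same items.
factor_labels = {
    0: "Sociability? Extraversion? Surgency?",
    1: "Anxiety? Neuroticism? Threat sensitivity?",
}
print(top_items(0), "->", factor_labels[0])
```

The numeric solution ends at the loadings; everything after that line is synthesis and interpretation of a set of observations, which is to say, qualitative analysis of quantitative data.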
Our purpose in explicating the four quadrants describing intersections of data and analysis is manifold. First, the two are frequently conflated in published work and everyday discussions on methods, yet it is clearly important to separate them. Second, thinking in this way helps reveal how psychologists, perhaps unwittingly, are already conducting some kinds of “mixed-methods” research (see also Maxwell, 2016). In many ways, this is unavoidable for all researchers (see Braun & Clarke, 2006). Even the most strident quantitative researchers will discuss their phenomena of interest in qualitative terms because psychological phenomena are inherently qualitative and require some degree of interpretation. Likewise, even the most strident qualitative researcher would struggle to avoid all references to quantities when representing their phenomena of interest (e.g., references to prevalence, whether communicated as frequencies or as descriptors such as “more/less common”). Finally, it helps expose the specific types of research considered unacceptable or less appropriate for the goals of the field, that is, qualitative data analyzed qualitatively (see Gergen et al., 2015; Lyons, 2009). A question to reflect on, though: If three of the quadrants are acceptable, is there any good reason why the fourth should not be? In the next section, we describe specific ways that such approaches can be fruitful for the goals of mainstream psychological research, particularly in the context of mixed-methods designs.
Mixed-Methods-Research Designs
The central argument for mixed-methods research is that regardless of one’s paradigmatic commitments, study design should follow from the research question of interest, and therefore, all psychologists ought to become more familiar with mixed methods. Furthermore, it is important that this familiarity includes an awareness of the varied combinations of qualitative and quantitative components and an intention to be deliberate in making methodological decisions about how to combine them (Creswell, 2021; Levitt et al., 2018). There are several different rationales for why researchers might pursue mixed methods as an optimal approach to address their questions, including a perceived need to enhance the study, first explore qualitatively, explain quantitative findings, or generalize their findings. Each of these motivations is associated with specific mixed-methods-research designs, and thus, it is critical that researchers align their research question, motivations, and study designs.
Most readers are well aware that “quantitative methods” are not a singular entity but, rather, subsume a broad array of specific methods that generally involve applying some statistical procedure to numerical data. The same is true for what we refer to as “qualitative methods,” which consist of both methods of data collection, including interviews, focus groups, and observation, and methods of data analysis, including narrative, phenomenological, and thematic analysis. Each of these methods has its own norms, assumptions, procedures, and conceptualizations of rigor and how to achieve it (Creswell et al., 2007).
The variety of quantitative methods available paired with the variety of qualitative methods available necessitates that there will be a large number of mixed-methods approaches when combining the two. Nevertheless, our focus here is not on those types of combinations but, rather, the methodological structure of the mixed-methods-research designs as they align with specific types of research questions. It is also important to be aware that there is no single taxonomy of mixed-methods designs (Bishop, 2015). Moreover, just as understanding of quantitative analyses evolves with subsequent work, the labels and descriptions of mixed-methods designs change over time, even among the same authors (e.g., Creswell & Plano Clark, 2017). Here, we elaborate on the four core mixed-methods designs described in detail by Creswell and Plano Clark (2007): triangulation, embedded, explanatory, and exploratory (Fig. 3). We find that these four designs have the broadest applicability to psychology researchers and that awareness of them can help motivate researchers’ own mixed-methods studies. In this section, we rely heavily on Creswell and Plano Clark (2007), a classic textbook in the field that is now in its third edition (Creswell & Plano Clark, 2017). We strongly encourage interested readers to consult this book and the other recommended readings provided at the end of this article.

Graphical descriptions of the four major mixed-methods designs, indicating the sequencing and weighting of the quantitative and qualitative components relative to one another. Adapted from Creswell and Plano Clark (2007).
A few considerations are in order before beginning the review of the four designs. First, because these are all mixed-methods designs, they do require some sort of mixing or integration of the quantitative and qualitative components (Fàbregues et al., 2024; Leech & Onwuegbuzie, 2010). The mixing can occur in the sampling (e.g., quantitative and qualitative data are collected from the same participants), the analysis (e.g., directly connecting the quantitative and qualitative data or analyzing the same data source both quantitatively and qualitatively), and/or the interpretation (e.g., the meaning of the results relies on the integration of the quantitative and qualitative analyses). Studies that rely on quantitative and qualitative methods but that do not mix are sometimes referred to as “multiple methods” rather than “mixed methods” (e.g., using qualitative and quantitative methods in distinct phases of the study to address separate aims; Creswell & Plano Clark, 2017).
Second, mixed-methods-research designs differ in the timing of the quantitative and qualitative methods (Creswell & Plano Clark, 2017). Generally, the timing can be concurrent, in which the two methods are conducted at the same time, or sequential, in which the methods are conducted in sequence and the design features of the second are dependent on the outcome of the first.
Finally, mixed-methods designs vary with respect to the weighting of each component; some designs place either the quantitative or qualitative component as dominant, and others place the two on equal footing (Gorard, 2010). We highlight these three considerations—mixing, timing, and weighting—through our discussion of the four designs. In the discussion here, we focus primarily on the specifics of design aspects; empirical examples of each design are provided in Table 1.
Table 1. Example Empirical Studies for Each Mixed-Methods Design
Triangulation design
The general principle of the triangulation design will be familiar to most psychology researchers: Do data from multiple sources converge toward the same interpretation? Triangulation via the multitrait-multimethod matrix has long been seen as a “gold-standard” approach to assessing construct validity (Campbell & Fiske, 1959). Whereas the multitrait-multimethod matrix consists of patterns of correlations and thus relies on data that are represented quantitatively, the triangulation mixed-methods design pertains to the use and integration of quantitative and qualitative methods and analysis to draw inferences about the research question.
In a triangulation design, the quantitative and qualitative data could be collected either concurrently or sequentially, depending on the researchers’ goals and how they intend to mix the data sources. Indeed, mixing could be accomplished in the sampling, data analysis, and/or interpretation phases. That is, the same participants could complete both a rating-scale instrument and semistructured interviews that cover similar ground, and thus, the quantitative and qualitative samples are the same and are intermixed systematically to address the same research question (e.g., Kim et al., 2013). Alternatively, interviews could be conducted with a subset of participants who completed the surveys or different participants altogether. The two sources of data could be integrated via quantitizing the qualitative data and conducting statistical analyses (if coming from the same participants), maintaining the qualitative data and using a joint display (Guetterman et al., 2021), or some other method to support metainferences (Creswell, 2021). Regardless of which of the preceding approaches is used, the integration of the qualitative and quantitative components is central to the interpretation of a triangulation design, and the whole purpose of the design is to examine the degree to which disparate sources of data are congruent. For this reason, the two sources of data are typically weighted equally.
Embedded design
The embedded design shares some similarities with the triangulation design in that a central goal of both designs is to use different sources of data to support convergent interpretations. Indeed, in more recent versions of their textbook, Creswell and Plano Clark (2017) placed triangulation and embedded designs under the broader header of convergent design, minimizing the substantive difference between the two. We break from that change in organizational structure and maintain the distinction for a few reasons. First, as we explore, there are some key differences in the designs that warrant keeping them separate. Second, and more important for thinking about mixed methods in psychology, one variant of the embedded design is the most commonly observed mixed-methods design used in the field and is the one that researchers seem to default to when they do not have strong backgrounds in mixed-methods design. For this reason, it is instructive to treat the embedded design separately from triangulation to better communicate with one’s intended audience.
Similar to the triangulation design, with the embedded design, there is some flexibility in the timing and mixing of the components. That is, depending on the research questions and goals of the project, data could be collected concurrently or sequentially, and mixing could be accomplished during sampling, analysis, and/or interpretation. What sets the embedded design apart from triangulation is the weighting of the quantitative and the qualitative methods. Whereas the two are set on equal footing in triangulation, with embedded designs, one method is primary, and the other is secondary. This fact gives rise to two variants of the embedded design, one in which the quantitative is primary and one in which the qualitative is primary.
The quantitative-primary embedded design will be familiar to many producers and consumers of mixed-methods research in psychology. In this design, the research questions are primarily addressed via quantitative analysis, and the associated methods drive many of the choices regarding sampling, analysis, and interpretation. A qualitative component is added as a supplement, augmenting what is provided by the central quantitative methods (e.g., in the context of randomized controlled trials; Gaugler et al., 2021; Plano Clark et al., 2013). When researchers include a qualitative component in a primarily quantitative study to “give voice to the participants,” “flesh out the results,” or “bring their findings to life,” they are using the embedded design (e.g., Cooper et al., 2005). Likewise, when journal editors and reviewers comment that the qualitative component “seems tacked on” or “does not clearly add anything to the study,” they are typically reacting to a poorly communicated embedded design. Authors must clearly convey the value of the qualitative component and how it contributes to one’s understanding of the findings.
The qualitative-primary embedded design will be less familiar to most researchers in psychology because it pertains to a project that is guided and dominated by the qualitative component, with the quantitative component playing a secondary, supplementary role. Although this design is seldom observed, it can be used to great effect, and indeed, its use could potentially facilitate greater appreciation for and uptake of qualitative methods in psychology. Note that the qualitative-primary embedded design is different from simply providing the frequency of themes or some other quantitative summary of an otherwise qualitative analysis. One could think of that as a type of mixed method, but such reporting is seldom situated within a specific mixed-methods design with an associated rationale. Rather, the quantitative component in an embedded design provides a broader context for understanding the primary qualitative findings. This could include some quantitative data from the participants or the broader pool from which they were drawn to better understand how the participants in the qualitative analysis fit in the broader distribution of the phenomenon of interest (e.g., Robinson, 2019). Alternatively, institutional, population, or demographic data could be included as an additional source of information to provide context for the qualitative analysis (Cooper et al., 2005). Because quantitative analyses are often prioritized by psychologists, it is particularly important when researchers use this type of design to clearly indicate up front that the role of the quantitative data is to bolster the qualitative research findings rather than to serve as the central focus, as it typically does.
Explanatory design
The triangulation and embedded designs are both focused on the convergence of the quantitative and qualitative data (Creswell & Plano Clark, 2017) and can be implemented in myriad ways depending on the research question and goals of the researcher. The explanatory design is distinct in purpose and in character. The explanatory design is, by definition, a quantitative-focused design that is conducted sequentially. The quantitative component is completed first, and then the qualitative component is designed and executed based on the results of the initial quantitative study. As the name implies, the purpose of the explanatory design is to use the qualitative component to help explain the findings of the quantitative component.
A classic aphorism in quantitative-focused research is that studies tend to raise more questions than they answer. Why did the control group show greater gains than the treatment group on the target outcome? Why are these two seemingly unrelated variables so highly correlated? Why is there so little change in response to a major stressor? The explanatory design can be deployed to address these types of questions. Moreover, the design can be used effectively to identify potential mediators (i.e., explanatory mechanisms) and moderators (i.e., contextual modifiers) that could be examined in subsequent quantitative projects (Bishop, 2015).
Use of the explanatory design by quantitatively minded researchers would certainly help advance their research programs in ways that would not be realized when relying on quantitative data alone (see Syed, 2015; Turner et al., 2025). Note, however, that doing so requires training and intention. Qualitative analysis is not “anything goes” but, rather, involves specific sets of conceptual and methodological procedures that need to be followed, just as with any quantitative analysis (Creswell et al., 2007; Rennie, 2012).
Exploratory design
The exploratory design is the reverse of the explanatory design. It is also conducted sequentially, but it is qualitative-focused. The qualitative component is completed first, with the goal being to explore the construct or phenomenon of interest that is subsequently examined using quantitative methods. The exploratory design can be used to address several researcher needs and motivations. First, sometimes, there is simply a need to explore qualitatively as an initial step. Conducting an initial qualitative study, whether it is a pilot study or a full stand-alone project, can be incredibly informative in its own right and provide a solid direction in which the researcher can proceed with further studies (Bishop, 2015). This general approach will be familiar to many psychology researchers because it is frequently used in the context of developing new psychological measures, in which researchers engage in interviews or focus groups with members of the target population to improve item generation and refine a set of items (Boateng et al., 2018; Coyle & Williams, 2000).
An entirely different motivation addressed by the exploratory design is the need to generalize. It is often stated that generalizability is not a goal of qualitative research, but this is incorrect, or at least overstated (Smith, 2018). This is true for some studies using some qualitative methods but is certainly not the case for the entirety of qualitative research. Whereas some qualitative methods are focused on the generalizability or transferability of theoretical propositions (see Robinson & McAdams, 2015), others do seek to make generalizations about the prevalence of observations, associations, or effects. The key is to examine the claim that the authors are making from their data rather than believing that methods are, by necessity, tied to specific inferential goals. Conducting a quantitative study following a qualitative study can help address the generalizability of the qualitative observations. Of course, quantitative studies have their own problems with generalizability (Yarkoni, 2022), but through combining qualitative and quantitative methods, more traction for generalizability of research findings can be gained (Syed & McLean, 2022).
Finally, the exploratory design is well suited for situations in which researchers have a need for cultural adaptation of their research materials. It is well known and widely documented that the vast majority of psychological research is based on samples with limited representation of domestic and global diversity (Roberts et al., 2020; Thalmayer et al., 2021; Yarkoni, 2022). Accordingly, researchers are often confronted with a challenge when seeking to use a previously developed scale or intervention with a new population. In such cases, a preliminary qualitative project would be valuable to assess the suitability of the measure or intervention and to determine the features that must be adapted (for examples, see Frisén & Wängqvist, 2011; Juang et al., 2023).
As with the explanatory design, greater uptake of the exploratory design would undoubtedly increase the rigor and quality of research and its interpretation, providing a descriptive lens from which to better assess constructs of interest. Indeed, there is a broad need to observe and describe psychological phenomena before testing specific hypotheses designed for prediction (Scheel, 2022; Syed & McLean, 2022).
Mixed-methods studies versus mixed-methods program of research
The preceding four mixed-methods-research designs pertain to single studies or, sometimes, multiple studies that researchers report together in a single article. However, we wish to highlight the utility of engaging in what has been labeled a “mixed-methods program of research” (Creswell & Plano Clark, 2017; McKim, 2017). This approach refers to a general orientation that researchers take toward their work on a particular topic, in which some studies may be quantitative only, some may be qualitative only, and some may be mixed methods, but each study is deliberately interconnected and mutually influential.
In outlining how mixed methods could be used to advance research in social psychology, Power et al. (2018) argued that a mixed-methods program of research is consistent with classical views of the field. They conceptualized a program of research as a recursive process in which different studies using different methods are synthesized and triangulated to continually check and refine findings, interpretations, and assumptions. Syed (2015) provided a description of this approach for the study of racial/ethnic identity, detailing specific examples of how small observations in qualitative studies were followed up and tested in subsequent quantitative studies and how peculiar findings in quantitative studies led to subsequent qualitative explorations to gain a deeper understanding.
As argued by Syed and McLean (2022), making greater use of qualitative and mixed-methods research in programs of research could help address issues that have arisen through both the replicability and generalizability crises (Open Science Collaboration, 2015; Yarkoni, 2022). In particular, qualitative methods are well suited for gaining a deeper understanding of psychological constructs, which can then help inform measurement and how to attune to contextual variation (Silan, 2019). Unfortunately, because of the dominance of quantitative methods in the field, reform efforts have largely failed to take perspectives from qualitative and mixed-methods research into account (for recent exceptions, see Reischer & Cowan, 2020; Steltenpohl et al., 2024). In addition to the methodological benefits, drawing from these literatures could help facilitate greater engagement in reflexivity about the choices researchers make in their research process, which is much needed in quantitative research (Humphreys et al., 2021; Jamieson et al., 2023; Rogers et al., 2024; Tafreshi et al., 2016).
Beyond the substantive value of this approach, we have found through our experiences and observations that using designs that align with the goals of our overall research program—rather than existing within the constraints of a particular design—helps keep us grounded and fresh. The value we have found in interacting with our participants directly and hearing their voices and experiences cannot be overstated. Doing so reminds us that our data points are actual complex people with actual complex lives and provides a fullness to the interpretation of our quantitative findings. Conversely, in qualitative methods, because we are hearing from the participants directly about their lives, we are often drawn to the most unique and interesting cases. The reality is, however, that most people are thoroughly average, so outsized attention to such cases can distort our understanding of the psychological phenomenon of interest (Morse, 2010). Moving between the qualitative and the quantitative can give us a better sense of how our observations fit with a broader distribution, calibrating our interpretations (Yoshikawa et al., 2008). Beyond these scientific rationales, we note that existing within the liminal space between qualitative and quantitative methods simply keeps us attentive and engaged with the work we are doing—we seldom get bored.
Conclusion and an Invitation
The intentional use of mixed-methods research remains scarce in psychology. In this article, we have tried to examine the philosophical and methodological misunderstandings that have contributed to this problem and described four core mixed-methods designs that hold great promise for the future of psychological science.
We contend that there is no compelling scientific rationale against using mixed-methods research. Rather, it is only assumptions, socialized beliefs, and entrenched practices that are closing researchers off to a whole world of methodological approaches. Yoshikawa et al. (2008) referred to the problem as “methodocentrism,” and others have described how researchers’ beliefs in their methods are so strong that they often do not fully recognize just how much they guide the scientific decision-making process (Greenwald, 2012; LeBel & Peters, 2011). These beliefs have led to a system of dissemination that erects further barriers to publishing mixed-methods work because researchers’ increased familiarity with quantitative methods can bias them against mixed methods, which then reinforces a cycle in which people are not trained or rewarded for applying mixed methods. Moreover, both learning about and carrying out qualitative studies are often seen as taking time and effort that is difficult to justify in the current structure that incentivizes rapid and voluminous publication records.
We also do not contend that using mixed methods is always appropriate for all research questions or contexts or that engaging in mixed methods is easy and accessible for most researchers. Being sufficiently competent in quantitative methods, qualitative methods, and how to mix them necessarily stretches researchers’ capacity for learning and developing expertise. This problem can be addressed through greater collaboration and use of a team-science model (Hall et al., 2018). Mixed-methods research can also take a long time to carry out, especially when using sequential designs in which one phase is dependent on the outcome of the previous. Moreover, publishing mixed-methods articles can be difficult at journals that lack relevant expertise and have strict and prohibitive page limits. These limitations point to the many structural factors that limit the uptake of mixed-methods research and the need for greater use of mixed methods to put pressure on the structures to change.
We invite psychology researchers to take our arguments seriously and consider where their beliefs about appropriate methods come from and the rationales they rely on to uphold those beliefs. To be clear, we are not advocating that all researchers engage in mixed methods all the time. Rather, we are advocating that all researchers see the value of mixed-methods research, engage with the philosophical and conceptual foundation of mixed methods, and consider how a mixed-methods design might help move their research forward. Ask yourself: What do you have to lose?
Acknowledgements
The article builds on a workshop first given by Moin Syed at the 2011 meeting of the Society for the Study of Emerging Adulthood, Providence, Rhode Island. It has been given many times subsequently, most notably since 2012 as a part of a course at the University of Gothenburg, Sweden. The foundational lectures, which track closely with the content of this article, are freely available for use in teaching and workshops at
. Many people have contributed feedback on the ideas presented here over the past 14 years, including Maria Wängqvist, Ann Frisén, Johanna Carlsson, Philip Hwang, Linda Juang, Kate McLean, and Ursula Moffitt. Special thanks to Kate McLean and Hollen Reischer for helpful comments on an earlier version of this article. All errors and perspectives remain our responsibility alone. Dulce Wilkinson Westberg is now at the University of California, Davis.
Transparency
Action Editor: Yasemin Kisbu-Sakarya
Editor: David A. Sbarra
Author Contributions
Recommended Reading
The following are useful resources for individuals interested in learning more about mixed-methods research and how to apply it to their work.
Creswell, J. W. (2021). A concise introduction to mixed methods research. Sage.
Creswell, J. W., & Plano Clark, V. L. (2017). Designing and conducting mixed method research (3rd ed.). Sage.
Levitt, H. M., Bamberg, M., Creswell, J. W., Frost, D. M., Josselson, R., & Suárez-Orozco, C. (2018). Journal article reporting standards for qualitative primary, qualitative meta-analytic, and mixed methods research in psychology: The APA Publications and Communications Board task force report. American Psychologist, 73(1), 26–46.
Tashakkori, A., & Teddlie, C. (Eds.). (2010). Sage handbook of mixed methods in social and behavioral research (2nd ed.). Sage.
Watkins, D. C. (2022). Secondary data in mixed methods research. Sage.
