Abstract
When examining the changes in society and the concomitant changes in research methods over the last century, qualitative inquiry has unquestionably been overshadowed by quantitative methods and has had to work to find its niche in the social sciences. Here, I explore the push factors that have made space for the establishment and legitimization of qualitative inquiry. I discuss what we are doing well in qualitative methods, then examine the status quo—present worries, concerns, and future trends. I present three major problems that need attention, critique, and resolution in qualitative methods to further strengthen our foothold as we move forward. Methodological development is one of the primary purposes of the International Institute of Qualitative Methods (IIQM). In closing, I examine the role of the IIQM in the global development of qualitative methods.
Over time, qualitative methods have changed, sometimes rapidly, other times slowly. Qualitative research had to edge its way into academia and is still earning its place. Initially, ethnography was prioritized because, in an era of quantitative inquiry, anthropology was shamelessly qualitative and led the way in describing ethnographic methods from the late 19th century (Pelto, 2017). Over the decades, we expanded from ethnography to grounded theory—and both methods have since matured into several “styles” or types. Many methods have emerged and been popularized, for instance, phenomenology, focus groups, semistructured interviews, digital methods, and now, mixed and multiple methods that incorporate some form of qualitative inquiry. We have seen disciplinary support for conversational analysis, discourse analysis, and narrative analysis. Some methods have held their own perspective, but in others, shifts have occurred. Participant observation now incorporates photovoice and video data. We have moved from using the perspectives of strangers who interpret the experiences of others from the outside, to centering on the use of self as data, as in autoethnography. Increasingly, we engage participants as equal partners in research, as in community-based participatory research.
We now have a rich menu of ways to access both the private, concealed realities and the public faces of humanness. Methods have developed, merged, been subject to faddism, and sometimes, I suspect, elitism. But today I have an agenda: I want you to consider what drives these dizzying changes. Where did they come from? Where are we headed? And most of all, what problems are on the horizon for qualitative research as we move to the next phase?
I will now consider the underlying push factors of change that affect qualitative methods and the ramifications of these changes. Next, I will examine the status quo—the surge of “popularity” of qualitative inquiry, and what we are actually doing well. Then, to balance this, I will speak of present-day trends in qualitative inquiry. Finally, I will discuss my perspective on three major crises in qualitative methods. Methodological development is one of the primary goals of the International Institute of Qualitative Methods (IIQM). In closing, I will pass this list on to you to resolve in the near future.
The Push Factors of Change
Qualitative methods are definitely not static; methods with the same label may appear in different forms—even when conducted by the same investigator or research team. Some of these changes in methods are rapid, spontaneous, and discreet; others are slow and obvious.
Change comes about because of advances in equipment easing and expediting data collection and analysis. The development of computers, recorders, cameras, and cell phones has enabled larger sample sizes (often huge), rapid data collection, and microanalysis of data bits. Often data are already available online, waiting for the right question. Software now transcribes and translates, making analyses more accessible and easier. I tell my students not to write their dissertations too quickly, for soon we will have predictive text that will do the bulk of the writing for them…
Changes come about because of the development of our knowledge base and perspectives about the phenomenon we are studying. Different theoretical explanations about what is going on compete with and replace the old. As new theoretical perspectives are developed, they are ignored or are discarded or solidified, changing our perception about what we see—and the world of our participants. These changes divert our research goals, and consequently our outcomes.
Change comes from shifting societal conditions—especially in health care, where modes of diagnosis and treatment change the nature of what we are studying, how we communicate those results, and the ramifications they have for our participants’ experiences of illness.
Change comes about because of the methodological push from outside qualitative inquiry. Challenges about the rigor of qualitative inquiry continue and inappropriate “solutions” continue to plague us, demanding our response to maintain the integrity of our methods.
Change comes because of the agenda and support of funders—for instance, the National Institutes of Health’s (NIH) interest in mixed methods encouraged the use and development of certain modes of inquiry over others.
These changes jar us out of complacency. Change may be good or bad, or both good and bad. I ask you to evaluate these changes carefully, to support those that advance our research, and to argue against and resist those that weaken qualitative inquiry.
Yet today qualitative research is strong.
Qualitative research is experiencing a surge of popularity. Qualitative Health Research (QHR) receives more than 1,200 new manuscripts a year, and the acceptance rate is only 13%. 1 Sage has asked that we refer the “surplus” of manuscripts to other Sage journals, which, with the permission of authors, we do. This should be a delicious, celebratory time for QHR; let us consider what we have achieved.
Kudos: Great Things—What Are We Doing Well?
This wave of new qualitative research is not a phenomenon confined to North America; it is worldwide. Some regions have matured and now produce excellent research. Canada is certainly one of the leaders. 2 In Canada, you have international experts doing workshops, publishing texts and articles (both methodological and substantive), mentoring, and including qualitative inquiry in your graduate programs. You have international stars: in phenomenology, the van Manens 3 (plural) and their students and colleagues; in narrative inquiry, a group led by Jean Clandinin 4 ; in grounded theory, Judith Wuest and her research group. Sally Thorne (2016) developed interpretive description, and the late Dorothy Smith’s (2005) development of institutional ethnography has placed Canada in the lead internationally. All of these Canadian “schools” publish excellent inquiry on a regular basis. Scientists also supporting qualitative inquiry include Joan Bottorff 5 and her colleagues at UBC, and foundational methods have been disseminated by the IIQM (especially by Maria Mayan (2009) and Karin Olson in the Thinking Qualitatively Workshop Series 6 ). The IIQM continues to have global reach through its webinars, continuing conferences, and journals (see https://www.ualberta.ca/international-institute-for-qualitative-methodology). We are indebted to Mitch Allen for his editorial support for our publications. 7 The application of qualitative results to clinical areas—modeled by Judith Wuest’s health interventions for abused women, which deserve international acclaim 8 —and Carol Estabrook’s center at the University of Alberta, 9 with its focus on research translation, are both extremely important. While the focus of the IIQM was on the development of qualitative methods, its specialty was qualitative health research.
10 Qualitative inquiry in Canada has the support of the Canadian Institutes of Health Research, and judging by the disproportionately high submission rates to QHR, Canada holds a very strong position in the qualitative world in applied health research.
Despite these successes, I am deeply concerned about the changing face of qualitative inquiry. Again, while this “success” should be a celebratory time, it is a time of concern. I have the feeling we are slipping off the peak of an iceberg. That is why we are here today.
Present-Day Concerns
Changes in methods are occurring at breathtaking speed, and they are rapidly resulting in shifts of research focus: from in-depth, theoretically profound research to minutiae and trivia; from broad and thoughtful inquiry to content that is obvious and results that reveal nothing new; and from careful conceptual and theoretical development to superficial description that skims over a topic. And paradoxically, some of these changes are being driven by an agenda purported to increase our standards of rigor.
What Are the Drivers Bringing About These Changes?
The first driver is a lack of investigator time and investment in the inquiry process itself. This arises from our lack of resources to do research. We neglect to conceptualize our data deeply, relying on superficial codes and adhering to familiar labels, rather than exploring the actual messages that our participants are communicating and linking and comparing these to established concepts. In our haste, we force ideas into words as single “codes” that fractionate data, which are then reconstituted into patterns of dictionary similarity that entirely miss the meaning intended by our participants. We are misfocused. Multiple coders and inter-coder agreement for reliability put agreement about mnemonic tags for data ahead of thoughtful discussion of the import and nuanced meaning of those data. If we have collaborative interpretation and team-based discussions that engage us in refining our insights, we are far ahead: we are making sense of data and carefully attending to validity. If we are haggling over the titles of tags that label those data, we are sacrificing rich interpretation for tidy classification (Lauren Clark, personal communication, October 3, 2019).
The second driver comes from factors related to the institutional review boards (IRBs) surveillance of qualitative researchers. Fearing “harm” emanating from the human act of simply telling one’s innermost feelings to a researcher who listens—which is basically what an unstructured interview is—the IRBs are insisting that researchers produce a list of questions to be asked of participants in the interview. This requirement of the IRBs has produced a proliferation of semistructured interview research, which is now used as a method—with dire consequences. Semistructured interviews allow the researcher to control the conversation and keep it on course—the researchers’ course, of course. These interviews follow familiar and predictable pathways, obtaining short responses and ending with rote and shallow results.
The third driver is the problem of proliferation. The proliferation of qualitative studies has resulted in an interesting trend. In the old days, students spent a lot of time finding a new topic, one that did not appear in the literature. Of course, all of these gaps were eventually plugged, so topics became more and more specialized. For example, consider the number of publications addressing HIV and stigma. This research was given a boost by an NIH call for proposals in the late 1990s, accompanied by available funding. The result was literally hundreds of publications exploring obvious links between HIV and stigma—an irresistible apple for qualitative researchers—as it interfaces with seeking care and caregiver–patient interaction, in different ages and genders and now in different ethnic groups, expanding to more remote populations globally. But fitting with the call for proposals also drove the development of new concepts—for instance, “self-stigma” (whatever happened to shame?). Is it shameful to use a lay label in scientific research? So, in our rush for funding or publication, we obfuscate. We rename. We confuse. We reinvent.
The fourth driver is the development of mixed methods. I was delighted with the formal introduction of mixed methods, thinking, “At last, qualitative inquiry will be fundable.” But let us explore the trends in this “marriage.” Despite some headway in the integration of qualitative and quantitative methods, we still have problems. There is a tendency to split the two components and to publish the qualitative component without the quantitative (or vice versa, publishing the quantitative without waiting for the qualitative to be completed)—or to simply submit the preliminary qualitative component as a stand-alone article. The most common “red flag” I see written in qualitative articles from a mixed-methods study is: This is a part of a larger project.
When I ask authors “Where are the other studies?” I am told they have already been published (once, even 2 years before). However, because these studies are separated by method, and other components published in other journals, they are not mixed-method research in the integrated sense and often not even connected in reference lists. 11 Further, a descriptive study of opinions of approximately nine people cannot be the sole foundation for substantive clinical recommendations, changes in practice or…Thus, small sample studies, although they proliferate, are not usually implementable and do not make a substantial contribution.
Worries and Future Concerns
I will now take this opportunity to present three of what I consider the most important problems facing qualitative inquiry today. I place them on your shoulders to examine, attack, and resolve.
Worry #1: The Ramifications of Using Checklists to Ensure Validity
Recently, qualitative inquiry has been modified from guidelines and reduced to basic, simple steps—instructions for doing qualitative research. These steps have spawned a plethora of articles providing checklists—lists of strategies that supposedly, if present in an article and checked off on the list, ensure that the article is solid, reliable, and valid (see, for instance, the consolidated criteria for reporting qualitative research [Tong et al., 2007]).
Deep trouble
First, one list of strategies does not fit all methods. Criteria for each of our broadest divisions of qualitative methods—descriptive or interpretive designs—cannot be ensured by a single list. Some of the strategies necessary and acceptable for descriptive research may actually invalidate interpretive research. And this is closely aligned with my previously mentioned concern: strategies to ensure validity are directly connected to research aims and outcomes and differ for descriptive and interpretive designs. For example, one may train multiple coders, develop a code book, and conduct interrater reliability with hard, descriptive data. But such techniques are incompatible with, and destroy, the processes of interpretation in interpretive inquiry. In interpretive inquiry, a researcher who has completed all of the interviews, and read, coded, and analyzed them, may understand what is implied in a single quotation. She or he may then recognize what is meant in a single phrase and how that information supports or extends a category. A second coder, without such contextual insight, will read the quotation at face value only: participant meanings, implied and inferred, are lost. Team research does not help with this problem. In interpretive inquiry, interrater reliability produces an invalid result and keeps research shallow. Under the guise of validity, it keeps the results descriptive, obvious, and trite (Morse, 1997).
Second, checklists imply to others that if the required checks have all been completed, then the research is rigorous. Even if the person making the decision regarding rigor has no knowledge of qualitative methods, by reviewing the checklist they may consider the article solid and conclude it should be accepted. Superficial qualitative results are thus judged by shallow generic criteria, compounding and legitimizing the problem.
Ouch
I am afraid that even some editors are requiring these checklists when a qualitative article is submitted—and even some QHR reviewers are requesting them of authors in their reports. One more point: excellent research is not achieved solely by the use of appropriate strategies or techniques. The skillful use of strategies only sets the stage for the conduct of inquiry. Excellent research requires excellent interpersonal skills in order to obtain good data. For data analysis, it requires knowledge of the literature, serious contemplation, cognition, conceptual, analytical, and theoretical skills, and even the ability to write. [Although with advances in predictive text, this may not be so crucial in the future—my Gmail tries to write itself.] But researchers, if we believe in checklist validation, we are paradoxically losing control of “quality,” with dire consequences for the future of our research. Ask yourself: can one actually do excellent qualitative research using a checklist? Excellent qualitative research is a conceptual activity.
Worry #2: Incomplete Inquiry
This is a worry that is internal to the conduct of qualitative research. Several strategies of analysis have emerged that are rapidly replacing traditional methods, and as a result are truncating the process of inquiry itself.
Let us take this head on: Thematic analysis, as described by Braun and Clarke (2006) provides, as an outcome, themes and subthemes. It may show some relationship between these themes and subthemes.
Fine, but then what? Your study is not yet completed. What do these themes signify? Are the themes developed? Are they temporally ordered or substantively complete (as parts of a whole), or linked with the literature? Is conceptual or theoretical development possible? Are the themes developed adequately to be linked with the literature and returned to the appropriate context, either clinical or community, for implementation? Are they generalizable to other settings and situations? What use is this structure of themes? We have forgotten to attend to the “so what?”
Braun and Clarke (2006) correctly present this strategy of thematic development as an “analytic method”—we are the ones who have grabbed it and embraced it as a complete method rather than a strategy for analysis, and apparently, according to Google Scholar, have cited it more than 61,000 times since 2006. I fear that this is simply an attribution of convenience on our part—even lazy rote referencing. Similarly, content analysis is not a complete method in itself—as any ethnographer would tell you. But I am afraid that treating such thematic research as a complete method puts us on a “crash and burn” trajectory. We are supporting and publishing incomplete, weak research that “signifies nothing,” simply theming for the purpose of theming.
Again, the remedy for this problem is in our own hands. We must tell the reader what the generated patterns of themes represent. Beyond meaning, the themes must be reintegrated with the literature, with theory, and with the arena of application. In your findings or discussion section, provide recommendations and solutions to be implemented for the population, ways to evaluate your recommendations, and recommended frames for application.
Worry #3: Glimmers. Research With Tiny Samples
Tiny sample questions of the century (and some of the last) have been “How many subjects will it take me to finish the study?” “How many dollars will this cost?” and “Can I afford to do this project without funding?” We underestimate our sample size because we are concerned about the limited availability of the people with the particular illnesses or conditions who may participate: “Can I find them? Will I get access? Will they agree to be in the study?” We may have other concerns, such as self-doubt about even doing the project. For instance, students tell me that a semistructured interview is preferable to an unstructured interview: knowing what to ask the participant with a semistructured interview is less intimidating than experiencing the uncertainty of waiting for (and hoping that) participants will “tell their story.” Approaching 10 participants seems less daunting than 30, at least on paper, in the proposal. Oddly, qualitative researchers are less concerned about the influence of culture and are happy to have a sprinkling of cultures in their sample, as long as their participants speak English (see Morse, 1995). In qualitative samples, this merging results in a loss of cultural differences in data—which is exactly counter to the intent of the NIH in its policy for the inclusion of minorities in research. Respect for cultural differences means that data from each cultural group in the sample must be saturated and analyzed separately.
The bottom line appears to be “Let’s design this project with the minimum number of participants.” Unfortunately for this approach, norms are developing for sample size—usually fewer than 15 and often fewer than 6 participants. Sampling is not a matter of mimicking a published study, but a strategy that must be approached carefully in light of many factors unique to your project, along with anticipating the ramifications of your sampling decision for the entire project.
The basic criterion for ceasing data collection is reported as the achievement of “saturation,” which is misunderstood—usually as the “emergence of no new data.” “No new data” is interpreted to mean that data from one interview are repeated in the next, and this criterion of replication is prematurely applied in the research process. Rather than using this redundancy as an indication for broadening the sample, or wondering why this replication occurs, researchers close shop. We think having a smaller stack of interviews makes analysis easier and quicker; actually, the opposite is true—the fewer the indicators of the concepts and theory available, the more difficult it is to see “what you have.” The less data, the shallower the analysis and the more trivial the results. The cost of incomplete analyses from inadequate data has enormous ramifications for your completed project.
Determining sample size depends on a number of criteria, and these differ for every project. To list a few:
The complexity of the questions/phenomenon being studied: The tighter the research question, the smaller the sample; the more complex the question, the larger the sample—or the more samples you will need for various components.
The scope of inquiry: Closely aligned with the previous point, be aware of the domain and boundaries of your topic. The tighter the scope of the project, the smaller the sample; the broader the scope, the larger. For instance, if you are describing how students with a fear of examinations perform when taking the exam, your sample will be smaller than if you included the entire process of examination: how these students prepare, behave on the day of the examination, respond in the exam and afterward, and when they receive their results. And of course, you may want to compare your “fear” sample with students who are confident and unafraid of exams.
The flexibility of the data collection: The less flexible the method, the larger the sample. A very structured method of data collection—a semistructured interview, for instance—requires a larger sample, as each participant contributes a smaller amount of data. With unstructured interviews, participants have the freedom to express themselves; fewer interviews (or participants) will be needed.
The number of strategies to be used: Are you going to use the same participants for all of the strategies? Or different samples? Are you doing mixed methods with separate samples? Consider each sample for completeness. One cannot simply “tally” the number of participants from all components and report the group total.
Variation of participants: The more cohesive the participants, the smaller the sample. The more variations—including gender, age, and ethnicity—the larger the sample, for you may have to “saturate” each group.
The complexity of your research method, aims, and goals. The more complex the data collection strategies, the larger the sample.
Your own expertise with thinking qualitatively, knowledge of the literature, and intensity of working and writing: The more experienced the researcher, the more practiced they are at analyzing, conceptualizing, and confirming the developing model.
The nature of the participants: The most common criterion used for sample selection is that participants speak the same language as the researchers. However, there are many more criteria to consider: Is the research topic acceptable to speak about with participants? Or is it “taboo” or socially unacceptable? Do your participants have the time, the physical energy, and the emotional energy to talk? Is your topic feasible for discussion with your participants? Can a private location for the interviews be found?
The bottom line: Considering all of these factors, how can we fix a definite number of subjects for any method or any given project? These factors are all unpredictable at the proposal stage. “How many subjects?” is actually a silly question when asked of qualitative inquiry.
Given the unique requirements of any project, if you really must put a “number of subjects” in a grant application budget or in your IRB application, guess a higher number rather than a low one. There is no penalty for overestimating enrollment, but if you underestimate and find mid-project that you need more participants than requested, it takes time to obtain permission from the IRB to increase your sample, and it is painful to be forced to terminate the study because of a lack of funds. Yet in our research, norms are developing for minimal sample sizes, even fewer than 6 for semistructured interviews. And the cost of using minimal samples is enormous for all concerned when the result is inadequate, incomplete, and unpublishable research. Skimping on the number of participants is a risk that you cannot afford to take.
This is not the end of my list, but these are the most urgent tasks to be resolved.
Directing the course of qualitative inquiry is in your hands, for how we choose to do our research determines the product: the outcomes, what we publish, what we reveal, and what we see or conceal. In QHR, there are additional stakes—high stakes—as the results of our research will mold the future of care, of how patients are treated, of what parts of the caregiving process nurses and physicians attend to—or ignore. It determines the future standards of care and ultimately health outcomes.
Teaching Fishing
My underlying agenda for the first decade of the IIQM was “teaching fishing” (as in the proverb “teach a man to fish, and you feed him for a lifetime”). The role of the IIQM was to meet the needs of emerging and enthusiastic researchers and to provide ways to facilitate good research. The IIQM provided a platform for leadership in the education and training, dissemination, and development of qualitative methods. This was achieved through workshops, conferences and residencies, publications, and the conduct and teaching of inquiry.
The IIQM had an ambitious goal in the early days yet had tremendous reach.
Now, the important question is, “Were we successful?”
Amazingly so—I do not think the University administration ever realized what a machine the IIQM was. The effects are still out there and are continuing to meet the challenges facing this new methodology.
The impact on Canada is evident. Consider your output: until recently, QHR received a comparable number of submissions from Canada as from the United States, despite the population differences. Several universities have developed independent centers, and a Canada Chair of Qualitative Inquiry was appointed for a number of years at the University of St. Thomas. 12 These are indicators of the embeddedness of qualitative inquiry and QHR in Canada.
Be Proud
By the end of the first 10 years, the IIQM had established global reach: our nine international sites were linked to 115 cooperating sites (primarily universities), and we spawned a large number of organizations, still operating a decade later:
From the IIQM conferences held in Mexico, the Ibero-American Association for Qualitative Inquiry emerged; it subsequently holds annual conferences in Central and South America and Spain.
Similarly, the Israeli Center for Qualitative Inquiry holds a biannual conference series in Israel, maintaining a high standard of qualitative research.
Until recently, biannual QHR master classes and conferences were held in Bournemouth, England, creating a large number of experts in the UK and Europe.
Seoul, Korea, holds annual conferences, has developed an Association for Qualitative Researchers, has translated numerous books into Korean, publishes a journal in Korean, and has even developed a multidisciplinary Academy of Qualitative Researchers.
Our site in South Africa offered workshops all over Africa and as far north as Jordan, paired its sites in South Africa with researchers in impoverished universities in other African countries, and supported their research. The University of Johannesburg graduated more than 100 PhD students with skills in qualitative inquiry.
And, the ultimate compliment: our model has been replicated in the United States by Norm Denzin as the International Congress of Qualitative Inquiry (ICQI). The ICQI holds an annual congress and workshops, and publishes texts from these presentations, at the University of Illinois at Urbana–Champaign.
As a result, qualitative inquiry is no longer a tentative endeavor, conducted by a minority of researchers. Qualitative inquiry is now embedded globally into the social sciences—and its future is over to you.
We still have a way to go. But we have much strength:
Look around you. Who is sitting on your left? Who is sitting on your right? Look behind you and in front of you. Make new friends and contacts. Make changes wisely. Monitor what is happening in qualitative inquiry. Work on these problems together.
For the future of qualitative health research is in your hands.
Thank you.
Footnotes
Author’s Note
This state-of-the-art address was given following the presentation of the Lifetime Achievement Award, at the 25th Qualitative Health Research Congress, Vancouver, British Columbia, Canada, October 27, 2019.
Acknowledgments
I thank Dr. Marnie Wood, Dean Emeritus, Faculty of Nursing, The Council for Health Science Deans, University of Alberta, and the Alberta Foundation for Medical Research, for their support in establishing the International Institute for Qualitative Methodology at the University of Alberta on February 23, 1998. Dr. Norah Keating Chaired the IIQM Board of Directors for the first decade, to 2007.
Declaration of Conflicting Interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author(s) received no financial support for the research, authorship, and/or publication of this article.
