Abstract
One of the raging debates in organization study concerns the use of “templates” in qualitative research. This curated debate brings together many of the players in that debate, who make statements of position relative to the issues involved and trade accusations and counter-accusations about statements they have made that in their view have been misinterpreted or misconstrued. Overall, it is quite a lively debate that reveals positions, points of tension and grounds for disagreement. Denny Gioia wrote the triggering essay that prompted other players to weigh in with their personal and professional views.
Introduction
Denny Gioia
In 2019, I got wind of a special issue of Organizational Research Methods dealing with the use of “templates” in organization study. Previously, in 2013, along with Kevin Corley and Aimee Hamilton, I had published a methodological article in ORM entitled, “Seeking Qualitative Rigor in Organization Study: Notes on the Gioia Methodology” (Gioia et al., 2013), which was an explanation of the philosophy behind, as well as the procedures for executing, a methodology I had been developing for more than 20 years at that point. Citations of that article have skyrocketed since then, as scholars apparently had been waiting with bated breath for a systematic qualitative methodology that could lend more rigor to their analyses and presentations of case studies.
I had been aware, however, that some scholars had been treating that methodology as a kind of cookbook to dress up their research reporting to make it look more rigorous than it actually was, so I figured (correctly, as it turned out) that the methodology was going to be a target of some of the contributors to the special issue—especially because I had also learned that several editors and reviewers had been counseling authors that they needed to provide more convincing evidence for their assertions and that they should consult that piece for a look at a methodology that might enable them to do just that.
So, I approached the editors of the special issue and argued that my work was likely to be a focus of one or more of the contributors to the issue, and that I should have an opportunity to defend myself against some likely criticism. The editors consulted among themselves and then surprised me by turning me down, instead suggesting that I write a review of studies that had used the methodology I had been advocating for so many years. That sounded to me like an eminently boring thing to do, so I said no, that I was not interested in writing that kind of typical and typically boring academic piece.
I tried to figure out why they would not want to take advantage of an offer that would likely bring more visibility to the rather arcane topic that was the subject of their special issue. How might I explain such a vexing stance?—a stance I viewed as lacking marketing savvy at the least, shortsighted at best, and, at worst, an effort to prevent my voice from disrupting a conversation focused only on the evils of using templates.
Wind the clock forward; the special issue comes out online. Sure enough, my fears were confirmed, as I read each piece in turn. Not only was the methodology I had been developing over all those years a prime target of many of the authors, but one piece even took my name in vain in the title! Many pilloried the whole notion of a template and indicted me as the prime purveyor of the offensive notion (by the way, I consider myself “Not Guilty” of these charges, as you will see in the essay below). What's a body to do in the face of such merciless criticism?
I have an idea. In the wake of the rejection by the special issue editors, I’ll email the current general editor of ORM and plead my case with him. A reasonable tactic, no? I argued that I had been convicted in absentia of an ostensibly egregious (non)crime, that a gross academic injustice had been done to my good self, and that the whole business of discussing the merits of templates was likely to drop out of sight unless we scholars could generate a little controversy and heat around the topic. Truth is, I was kind of proud of myself for the articulateness of my argument.
I suggested that I write a commentary about whether templates are a good thing (the opinion of many scholars looking for a rigorous approach to doing qualitative case studies) or a bad thing (the opinion of many of the contributors to the special issue). His response? An academic raspberry. He decided that he should not end-run the special issue editors, so he first consulted them (!), who, predictably enough, said something to the effect of: “We already told this guy ‘no’ once, and he apparently cannot take no for an answer, so he's trying to finagle and finesse you. Tell him no; maybe he’ll go away.” So, the editor did just that. In no uncertain terms. No. No. No. Did you hear me? No.
Hmmm. Now what?
How about another bright idea? I’ll approach Richard Stackman, innovative editor of Journal of Management Inquiry, and propose that he consider assuaging my offended sensibilities as the basis for a lively piece for JMI. Ever the accomplished negotiator and resident provocateur, he made me a counteroffer: Given that he and Pablo Martin de Holan (his editorial partner in crime at JMI) had just instituted a new “curated” section of the journal, would I consider curating a debate among myself and some of the other scholars who had pointed an accusing finger in my direction, as well as those who have used the methodology to good effect? Would I do that? “Oh! Please, please don't throw me into that briar patch!,” was my furtive response (resurrecting Br’er Rabbit—see Harris, 1880).
So, I invited some of the contributors to the ORM special issue whom I had singled out as misguided sinners to write a rejoinder to the essay below, as well as a few scholars whom I considered to be potential advocates for the approach, plus Ann Langley (a scholar well known for her strong opinions on the conduct of qualitative research, who fessed up as the likely originator of the “template” term) and Kathy Eisenhardt, another well-known scholar who uses qualitative methods to investigate important questions via a multiple case study approach. I also invited David Silverman, whose approach I had criticized, to write a rejoinder. Interestingly, some of the authors who assailed me or otherwise contributed to the special issue declined my invitation to be a part of the debate—more's the pity; they apparently were unwilling to get involved in a spirited little contretemps. The notable exception was Mees-Buss, Welch, and Piekkari, for whom I reserved my most pointed criticism (as you will see); they are good sports, and I heartily thank them. Everyone else readily agreed, apparently welcoming a few linguistic fisticuffs.
Do you like a good little academic fight? Then read on …
Coming Out Fighting: Addressing Critiques of a Systematic Qualitative Methodology
Denny Gioia
A recent special issue of Organizational Research Methods, ostensibly dealing with the use of “templates” in qualitative organizational research, has given me occasion to reflect on a number of issues about how we go about doing research. Because a methodology I have been developing and honing for over 30 years now was a target of many of the works in that special issue, I thought we should start by looking at the bigger picture and using the occasion to reconsider some of the approaches and practices we have historically employed to develop our bases of knowledge.
There are several principles underlying my approach to understanding the organizational world, including the assumptions that (1) these worlds are essentially socially constructed; (2) the people who inhabit them are knowledgeable; (3) we as researchers of these worlds must show the evidence for our assertions about them; and (4) the primary aim of qualitative research is to present a plausible, defensible explanation of the phenomena we observe. Allow me to briefly address each of these principles because they are key to the arguments and critiques that follow.
First, my most fundamental assumption is that ontology implies epistemology implies methodology. 1 Ontology concerns assumptions we make about the nature of the organizations and the people in them. Epistemology concerns assumptions about how we know about organizations and their actions. Methodology concerns assumptions about how we study organizations and the people concerned with them. We are talking methodology in this curated debate, but to get there my belief is that we should first briefly articulate our ontological and epistemological assumptions. So, here are mine:
My most basic assumption is that organizing and organizations are essentially socially constructed processes and accomplishments. What's that mean, anyway? Just this: at an elemental level I see organizations as structurational creations (Giddens, 1984)—meaning that organization members (agents) engage in actions that create structures, which recursively enable and constrain further action. But it is human agency that starts this ball rolling, so agency has primacy. Once the ball is rolling, people treat the structures they create as if they were real, and act accordingly. 2 At the heart of this view is the assumption that social “reality” is socially constructed (Berger & Luckmann, 1967). One consequence here is that people can be seen as making stuff up, and then treating the stuff they’ve made up as structures and processes that govern future action according to “rules” that they themselves have consensually fashioned. So, we need to deal with that “reality” in our studies.
A second pivotal assumption I invoke is that people at work are knowledgeable. They have a good sense of what they’re doing, how they’re doing it, and why they’re doing it. Perhaps most importantly, they are capable of explaining what, how, and why they are doing their work to us as clever, if occasionally naive, researchers. They are not Garfinkel’s (1967) cultural dopes; they are not structural or procedural dopes, either. We thick researchers too often lose sight of their knowledgeability and seem to presume that we know more than they do. And we blithely do all this when it is the experience of managers that we are trying to understand. Why do we do such a curious thing? As a character in Shakespeare in Love said repeatedly about curious explanations for actions: “It's a mystery. …”
However, we theorists and researchers are not dopes, either. We might understand things about their experience in different terms than they do and even produce a different kind of insight into understanding their experience, but our understanding does not necessarily supersede their understanding. What, then, should we do if there are differing accounts of experience? That sort of question leads directly to my next assumption: We should report both forms of understanding, via a systematically derived data structure that accounts for first-order (informant-based) and second-order (researcher-based) understandings in tandem.
Such a data structure not only enables reporting from multiple perspectives but also allows a way of demonstrating evidence in support of conclusions a researcher might draw (thus addressing a major critique of qualitative research—that qualitative researchers are too often cavalier about showing data-to-theory connections). This sort of approach counters the damning accusation that qualitative research is too often too impressionistic and helps answer the epistemological question, “How do you know what you think you know anyway?” Or, perhaps the question is more appropriately phrased as, “How do I know that you know what you say you know?”
My final assumption concerns the purpose of discovery-oriented research. Simply put, I view the purpose of (especially interpretive) research not as aimed at generating a “correct” answer to a research question (that's what positivist/functionalist research does), but rather at generating a plausible, defensible (abductive) explanation of how and/or why a phenomenon occurs (see Gioia et al., 2013). Showing the evidence that supports an inferential conclusion is critical these days, regardless of the paradigm with which you approach organization study (see Burrell & Morgan, 1979).
Summarizing a Systematic Methodological Approach
One consequence of the above assumptions is that I begin all analyses with a depiction of informants’ understandings of their work (because I assume they are knowledgeable about that work). Although we often cannot have a first-hand understanding of informants’ experience, we nonetheless should try to represent that experience by accounting for their understandings, and that means portraying informants’ experience in their terms, not terms we theorists and researchers might prefer to use. However, an informant-level view is not the only viable view. So, we also need a more “theoretical” view (such that any resulting portrayal can be transferable to other domains—which is one reason why a good theory is so very useful—see Lewin, 1943). Researchers therefore should zero in on concepts and their interrelationships (including those that even informants might not be aware of). The larger point is that we theorists should present findings in ways that represent essential informant experience and present systematic evidence that can adequately represent a more theoretical view.
How to do that? My answer is that we qualitative researchers should provide some sort of “data structure” that shows how the informant-based (first order) codes relate (similarly or differently) to researcher-based (second order) themes and dimensions (see Gioia et al., 2013; cf. Van Maanen, 1979). I am known around my shop for a little saying that emphasizes how seriously I advocate working from a data structure. That statement is: “You got no data structure; you got nothin’.” Now, I don't mean that flippant saying literally, per se; I mean that you as a researcher need to figure out some way of providing convincing evidence that supports your conclusions.
Subsequently, I show how the data structure captures and provides evidence that addresses the main research question by coordinating the data structure with supporting tables, appendices, and figures (especially the figure depicting the emergent theory—and, these days, I tend to show the grounded model up front as a way of facilitating understanding and memorability). Developing all these ancillary accouterments is not merely a tactic for presenting “rigorous” analyses; it also helps me consider the entire research process from soup to nuts. [As a quick sidebar, I might note that there is a debate around presentational tactics. On one hand, interpretive research is often grounded research, wherein the theory is grounded in the data and understandings of the informants. That implies that any emergent theory should be presented after reporting the data that generated the theory. This mode of presentation uses what I might term a “storytelling” model. And we all love a good story, preferably with a surprise ending. But, that's not how typical human understanding works. Most people understand better if they have an advance framework into which they can place information. Given that one of the goals of the research is to foster understanding, I now often violate the tenets of qualitative/interpretive research and report the grounded model before reporting the findings (with an explanation for why I am employing such a format). This latter approach follows what I term a “journalists’” model—which I summarize as: “tell ’em what you’re gonna tell ’em; then tell ’em; then tell ’em what you told ’em,” which is actually a reasonable way of depicting our usual (if implicitly understood) approach to writing the introduction, findings, and discussion sections. For that reason, if memorability is your game (and it should be), the journalists’ model usually works best.]
One of my main purposes these days is to show how the (static) data structure relates to the (dynamic) inductive model; to do so, I emphasize relationships between the first- and second-order readings. My metaphorical position is that if the (static) data structure represents a snapshot (a still photo) of the emergent theory, then the (dynamic) grounded model represents a motion picture version of the theory (see Gioia, 2019). The upshot of all these elaborate procedures is to generate a grounded model that explains the hows and whys of the phenomena of interest. The ensuing grounded model generally does three things: (1) it tends to affirm some existing concepts and relationships (we would only be surprised if the emergent model did not affirm some prior notions because a lack of affirmation would imply that previous research found nothing notable, which is absurd); (2) it tends to extend existing knowledge by augmenting or expanding what we already know; and (3) it tends ideally to generate new concepts or ways of seeing.
Critiques of the Gioia Methodology
I intended the methodological approach I am suggesting primarily as a means of enabling ways of knowing about new concepts and new ways of seeing old concepts (see Gioia et al., 2013). Critics of the approach argue that: it's too structured; it's too stylized; it undermines the potential for creativity; it weakens the opportunity for researchers to show their insight; it is actually a functionalist approach in disguise (!), etc., etc. Nonsense. Just nonsense. There is a great opportunity for insight in the approach I am suggesting, but it is an opportunity within reasonable bounds, because I insist that researchers account for informants’ interpretations and constructions of reality in their reporting. In that sense, it is a kind of “freedom with fences” (see Bowlby, 1988 for the original articulation of the basic notion; see Kaufman et al., 1999 and Stenzel, 2011, for popular labels of the idea). It also amounts to a way of executing Weick’s (1989) notion of “disciplined imagination.” Furthermore, the “freedom with fences” phrasing captures the paradox that structure confers freedom. As a former doctoral student with a military background once said, “For me, structure is freedom,” meaning that structure gave him freedom to operate within reasonable bounds.
In my view, the critiques of the methodology I am proposing too often have not paid adequate attention to the philosophy-of-social-science assumptions behind the methodology 3 and their manifestations in practice. I initiated the development of the approach because of (justified) criticisms that qualitative research in the ‘70s, ‘80s, and ‘90s was overly impressionistic and did not provide adequate evidence in support of assertions made. I simply evolved a more systematic approach that asks qualitative researchers to show consideration for appropriate ontological and epistemological assumptions, to treat people at work as knowledgeable, and to show the evidence for their claims. Properly executed, the methodology need not be afflicted with any of its alleged shortcomings. Adhering to the tenets of such a systematic approach simply requires that researchers show the basis for their claims. I really don't care how researchers show their evidence, so long as they show it, rather than just telling the reader about it (see Golden-Biddle & Locke, 2007; Locke & Golden-Biddle, 1997).
Truth be told, I began developing this methodology as a kind of survival strategy—necessity being the mother of invention and all that—in the sense that it was necessary for me to show reviewers who had little knowledge of qualitative research (let alone interpretive research) that I knew what I was talking about. Actually, I welcome critiques so long as those critiques are based on accurate representations. The “Gioia Methodology” (not my label, by the way) seems to have come under fire because its wide adoption has allegedly limited how we go about studying organizational phenomena. Some of those critiques are well considered; others seem to be bogeyman inventions—straw man assumptions stemming from researchers imagining their own worst fears.
Perhaps the most pervasive critique is that “It's a template!” Even the call for papers for the special issue of Organizational Research Methods on the subject characterized the rise of templates as part of a “worrisome” trend, expressing a fear that using templates would somehow compromise qualitative research's traditional power for discovery. The editors of that special issue (Kohler et al., 2019) lamented that structured approaches to data presentation would have unintended negative consequences for novel theorizing. Frankly, I view this criticism as overblown, but it has stayed with us. In the political realm, a long-standing tenet is that “perception is reality” (which is a succinct restatement of W.I. Thomas’, 1923, original observation that if a [person] interprets a situation as real, it is real in its consequences). Nonetheless, via relentless repetition, it appears to have become received wisdom that the Gioia or Eisenhardt or Langley methodologies should be seen and used as “templates.” 4
My rejoinder to the characterization of the Gioia et al. approach to research as a template is simply this: I never meant the methodology to be used as a template; I have said so repeatedly, and I’ll say it again here: The methodology I have been developing over all these years should not be treated like a cookbook recipe. Nonetheless, other scholars have replaced their own creative reasoning with a cookbook-like application of the Gioia-methodology procedures as a surface-level substitute for “rigor” in their own analyses. Cornelissen (2017) got it exactly right when he noted that this sort of use of any template can act as a straitjacket that limits flexibility and creativity in theorizing. Yet, as Harley and Cornelissen (2020), as well as Locke et al. (2020), so clearly point out, rigor does not reside in a template but in the application of an approach that is rigorous. Any template might supply scholars with a protocol for analyzing and presenting their findings, which they then can use as a justification for doing anything they damn well please. Feyerabend (1975) made a mini-career out of arguing sarcastically that, methodologically, “anything goes.” The approach I am advocating is, if anything, counter to the notion that anything goes. Properly employed, the methodology says, in effect, that researchers cannot just make stuff up; they must show the evidence for their assertions.
Still, I often marvel at how other scholars apply the methodology inappropriately as they treat it as a kind of template. They lift the first-order/second-order/data structure terminology and apply it in ways that even I do not recognize. (I am withholding names here, to protect the guilty.) I find myself simply amazed at how often people use the methodology inappropriately because they purloin and apply the labels to their own techniques—an apparent manifestation of the old linguistic wisdom that once authors describe their beliefs in writing, they no longer own the interpretation of those words. My note-to-self after reading many of the critiques is that they are making monsters that do not actually exist if one executes the methodology appropriately.
My position is that explanations should somehow account for informants’ own interpretations because, as noted earlier, my experiential assumption is that people at work are knowledgeable about what/how/why they are trying to do something, so we researchers need to account in some fashion for the statements of knowledgeable informants interpreting and understanding their own constructions of reality. After that, researchers can offer their own gloss on first-order data. All I am really trying to do is to get people to show their evidence. I really do not care how you do it; “Just Do It!” (as some Nike corporate avatar so eloquently put it).
A number of scholars have also decried the use of more systematic qualitative procedures as somehow inhibiting innovation. Lê and Schmid (2020), in particular, hold that there has been too little methodological innovation in qualitative research, which is a fair enough criticism, and they cite a number of exemplary qualitative studies that do actually display innovative methodological techniques, which I think is terrific. I might argue, however, that focusing on methodological innovation amounts to looking in the wrong place. 5 The germane question should instead be: Have the methodological techniques generated innovative theoretical findings? In my mind, insightful findings outstrip innovative methodological techniques every time, unless those innovative techniques are relevant to uncovering innovative ways of seeing—and all the studies Lê and Schmid cite meet that high-bar criterion. Locke et al. (2020) argue that there need be no established sequence to coding practices. I do not disagree with them, either, except to say that, given my belief in knowledgeable informants, coding needs to account for a description that is adequate at the level of meaning of those informants. Thereafter, researchers are free to offer any justifiable interpretation they wish, so long as that interpretation is data-based and somehow relates to informants’ interpretations, even if it offers a contrast. I object to any portrayal that implies that the researcher's interpretation is more “correct” than that of the informants. Such a stance permits researchers to make stuff up under the guise of “better” (née “alternative”) interpretations; that is simply not OK with me.
The approach I am advocating allows for quite a lot of creativity, as well. I have slyly labeled the latitude for insight as a “Gestalt analysis” (Gioia & Thomas, 1996) or a “Shazam!” (Gioia, 2004, 2017) or, in the case of a big insight, a “Grand Shazam!” (Gioia, 2021) but these are all deceptive labels for freedom-with-fences license to bring the researcher's own insights to the research. I find it cringeworthy when people use the methodology as a recipe or a step-by-step template, but there is plenty of latitude in this approach for achieving rigor.
Perhaps the most passionate objection to the methodology I have been advocating comes from Mees-Buss et al. (2020), who rightly characterize the methodology as “naturalistic” in its roots. These authors, however, accuse those who use methodologies like the one I have been developing of employing some sort of “naive induction.” That terminology is insulting both to informants (whom they apparently treat as unsophisticated) and researchers (who are arguably portrayed as easily duped). I also think it is inaccurate, given the opportunities for insight conferred by a proper application of the approach I propose.
Mees-Buss et al. (2020) call for a return to a hermeneutic orientation. They characterize this orientation as adopting an appreciation for and a critical stance toward the interpretation of data, framing the researcher as an “interpreter” who can offer an alternative way of seeing that is not predicated on first-order interpretations. 6 I will readily agree that researchers can offer ways of seeing that are insightful and informative and should be included in a presentation of findings. Silverman (2017, 2020) has also been concerned about the pitfalls of taking informant accounts as accurate explanations. He would have us grant precedence to a researcher's account as somehow more omniscient and credible than those of knowledgeable informants, arguing that researchers have a different way of seeing that informants themselves cannot see.
Behind this sort of argument is a kind of academic arrogance that offends my sensibilities. Of course, we should account for a researcher's different and informative way of seeing, but why rank it above a knowledgeable informant's account? This is exactly why the methodology I have been developing employs both first-order and second-order accounts. It always, always, always acknowledges the ways that knowledgeable informants see their own worlds. Researchers can roam from there—but not into the realm of fanciful interpretations that grant researchers an unlimited license to be brilliant. Researchers have some license, but only with explanations that also account for the informants’ ways of understanding.
The practical fact is that we need to start somewhere, and I would much prefer to start with the informants’ understandings. 7 Researchers have latitude to offer alternative insights, of course, so long as they also acknowledge the informants’ ways of seeing. If we follow Mees-Buss et al.'s (2020) argument to its logical conclusion and maintain that regardless of how open and informed informants are, they are not sufficiently reliable interpreters of their own experience and, therefore, that the construction of their own reality is not a good basis for theorizing, we end up in a very questionable, slippery-slope place. We would be in the position of dismissing a knowledgeable informant's own interpretation of her or his experience, such that we think we should substitute our own (ostensibly better) interpretation. If you have this orientation, I caution you to be very careful about what you call for. …
Zilber and Zanoni (2020) complained that even organizational ethnography seemed to be growing ever more distant from the lived experience of organization members. And that is one of the main reasons I insist that researchers account for first-order understanding. My dictum is that researchers do not have unbounded freedom of interpretation. The approach I am advocating makes data-to-theory linkages transparent, but it also allows latitude to make insightful (yet plausible) explanations for phenomena observed. In my mind, a key methodological attribute conferred is flexibility.
As a parting shot, I love the question ostensibly posed by hermeneutic researchers: “What is really going on here?” (Mees-Buss et al., 2020). The stealth question behind that question is, however, “Who gets to define ‘reality’?” The hermeneutic answer would seem to be, “The researcher, that's who.” Think about that. It's a stance that could be viewed as condescending, especially when (more so than other researchers) the hermeneutic researcher should most recognize that there are multiple possible/multiple plausible realities. And that raises the nontrivial question of whose reality should be given precedence. My answer: neither and both.
Conclusion
Qualitative research is usually aimed at representing messy, complex, phenomenal worlds in the form of simplified theoretical models. That pursuit implies that we theorists and researchers should become sensegivers by simplifying the complex in essential ways (that's what models inevitably do anyway). In my view, we should practice this kind of “simplexity” and convey simpler ways of understanding complex concepts—although I acknowledge that I am using the notion of simplexity in a somewhat different fashion than Colville et al. (2012) used it. Still, our charge as researchers should be to (1) use appropriate ontological assumptions that acknowledge much of organizational reality as socially constructed; (2) treat organization members as knowledgeable and, therefore, to adequately represent their experience; (3) show evidence in support of our assertions; and (4) present plausible, defensible explanations of the organizational experience we are trying to understand. Simple, yes? But, difficult to accomplish in methodological practice.
Initiation, Diffusion, Reification, Pushback, and Adjustment: Templating as an Ambivalent Process in Qualitative Research
Ann Langley
I defended my PhD using qualitative methods in 1987. That was some time before Kathleen Eisenhardt developed her approach to theory building from multiple case studies (Eisenhardt, 1989a), and before Denny Gioia published the early ethnographic studies that led to what came to be known as the Gioia methodology (Gioia & Chittipeddi, 1991; Gioia et al., 1994). With no qualitative methods training in my doctoral program, I confess that I struggled quite a bit, and especially with the data analysis. Indeed, I basically made it up as I went along, just as Kathy and Denny must have done, although I think they managed better than I did. I read Glaser and Strauss (1967) and was inspired by it, but I could not see from what the authors wrote in their book precisely what they did to generate theory from data—it seemed magical but extremely foggy. I experienced a similar sense of wonder and fogginess in reading several other published qualitative studies. The first editions of Miles and Huberman (1984) and Yin (1984) came out before I finished my thesis, but not early enough to influence me significantly. I mostly just battled away on my own—engaging in a form of methodological “bricolage” (Pratt et al., 2020b), just as many others were doing. For the purposes of the present essay, I looked back at the articles I published from my thesis, and I see that I only cited two methodological sources: Glaser and Strauss (1967) and a paper by Mintzberg (1979) in Administrative Science Quarterly arguing for a strategy of “direct research.” Wow! How did I ever get away with that?
I can still remember the moment when I first read Kathy Eisenhardt's (1989b) piece in Academy of Management Journal on fast decision making—it was like a light coming on—here was a study drawing on qualitative data where you could actually see how the author systematically reached their conclusions. I had a similar sense of enlightenment when I came across the Gioia et al. (1994) piece in Organization Science—Okay! Here was what rigorously grounded theorizing from a single case study might look like. Kathy Eisenhardt and Denny Gioia helped lift the fog. And they helped communicate to the somewhat skeptical audience in the top management journals what qualitative research could do. They made it clearer, and they also helped make it legitimate.
It was not just them. I also remember being blown away by Barley's (1986, 1990) elegantly designed, fine-grained, rigorous, and insightful work on radiology departments, and by Van de Ven's (1992) account of the methods he and his colleagues developed for the Minnesota innovation studies, as well as his articulation of process research more generally. Markus’ (1983) paper on the politics of information technology implementation where she tried out alternate theoretical narratives on the same case, adopting an approach inspired in part by Allison’s (1971) study of the Cuban missile crisis, was also an inspiration.
These articles were all exemplars for me, each in different ways. I incorporated them into my methods classes, and they also starred in my 1999 Academy of Management Review paper on “Strategies for Theorizing from Process Data” where I organized and assessed the different approaches I had discovered (and in some cases, experimented with) to analyzing qualitative data on organizational processes (Langley, 1999). That paper essentially captured a collection of seven different kinds of answers to the struggles I had personally encountered in analyzing qualitative data for my own thesis and in other work after that.
At that point, I called all the key articles I featured “exemplars,” and the seven approaches “strategies for theorizing.” But perhaps the seeds of what I call in this essay a “templating process” had already been sown. Specifically, I will label this the “Initiation Phase.” Templating begins as scholars develop a successful methodological approach (no doubt through a process of methodological bricolage similar to that described by Pratt et al., 2020b) that begins to be recognized and held up as exemplary by others. The original authors find the approach sufficiently satisfying to reuse it and elaborate it further in their own subsequent work as Kathy and Denny did, so that over time, their methodological style becomes instantly recognizable and is applied to a variety of different topics, often in collaboration with students. In Kathy's case, a methodological article that explained the logic of her method of theorizing from cases had appeared in 1989 (Eisenhardt, 1989a), helping to articulate its contours in a clear manner that others might follow.
Which brings us to the “Diffusion Phase.” This happens as people beyond the original authors and their students begin to adopt, imitate, and recreate their own recognizably similar versions of the same methodological approach. The methods are appealing, precisely for the reasons I mentioned above: they show how qualitative research can be made systematic. They enable the production of enlightening and interesting findings, and journal reviewers and editors find them credible as well, so the styles become popular and more people start to notice this and use them, perhaps calling on the original authors to help them in developing their work, or perhaps not. So far, so good. This is all positive. The overall quality of qualitative research appears to be improving and qualitative research is becoming more legitimate overall, to the benefit of the field.
At some point, however, we enter what I will call the “Reification Phase.” And here, I would like to make a small confession, because I feel slightly implicated in what happened next, though perhaps this is self-delusion. In 2010, Don Bergh invited me to write a chapter for a methodological book series entitled Research Methods in Strategy and Management. Chahrazad Abdallah and I got together to do this. We had been noticing the increasing popularity of certain styles of qualitative research and thought we would like to document them, notably relating them to their different ontological and epistemological assumptions, as well as articulating the underlying logics of the methods themselves and their rhetorical development in published articles. A central challenge about doing qualitative research for me has always been the demand to deliver both novelty and credibility. If what you are showing is not new, why would you do qualitative research? But, if it really is new, it might be harder to believe (not credible). And if it is too easy to believe, maybe it is not particularly new. This challenge and the way it can be met by using different methodologies became an underlying thread in our chapter.
We included four methodological approaches in the chapter, each based on different ontological and epistemological assumptions, and two of which appeared much more codified than the others, those developed by Kathleen Eisenhardt and Denny Gioia and their students. And (aargh!!!) we labeled these “templates.” The other two were and still are much less formalized and codified (practice-based approaches and discursive approaches) and we called them “turns.” To develop the description of the “templates,” Chahrazad and I read everything that Kathy and Denny had written and published to that point, including the doctoral theses of their students. We also relied on Kathy and colleagues’ papers describing the multiple case study method (Eisenhardt, 1989a; Eisenhardt & Graebner, 2007), and I had a telephone conversation with Denny to ensure that we captured the essence of his approach. At that point, the Organizational Research Methods piece that clarified it had not appeared (Gioia et al., 2013).
To our surprise, the “Templates and Turns” chapter (Langley & Abdallah, 2011), though not particularly easy to find in library databases, became quite popular. However, we soon realized that despite our own caveats about the risks of relying too closely on templates, it was that section of the chapter that readers picked up on the most. Indeed, at one point, I became aware that there was an electronic version of the chapter circulating in which the sections on the “practice turn” and the “discursive turn” were missing altogether. Given this, I cannot help wondering whether our chapter contributed in small part to reifying or “performing” Kathy's and Denny's methods as “templates,” for better or worse. For better, because I believe that these methods offer brilliant ways to generate credible novelty in qualitative research as long as one adheres to the distinct ontological and epistemological assumptions that underpin them. Our chapter also helped to codify and compare different ways of doing this for students and junior scholars.
For worse, because “templating,” or reifying a method as a template (explicit in our chapter, but implicit in other articles, including even those by the authors themselves) (Eisenhardt, 1989a; Gehman et al., 2018; Gioia et al., 2013) tends to distill it, freeze it, and institutionalize it in potentially problematic ways. In particular, a reified template may become accepted not just as a useful inspiration, but as the “one best way,” regardless of whether it is suitable for the study at hand. Mees-Buss et al. (2020) reveal with their review of qualitative articles in top journals that this may have happened, in particular with Denny Gioia's methodology. Informal conversations and other evidence (e.g., Köhler et al., 2022) suggest that some have felt that this methodology was expected of them, or even imposed on them by editors and reviewers, sometimes against their better judgment. Denny himself deplores some of the ways in which his methods and tools (notably the data structure display) have been erroneously or awkwardly deployed, sometimes as a façade, decoupled from what the authors really did (Gioia et al., 2013; Gioia, this issue).
Which brings us to the final phase of the templating process that I label “Pushback and Adjustment,” which is where we are now. Many qualitative researchers have clearly been expressing frustration with efforts to pin down, codify, or constrain the methods they use in their studies by imposing formalized procedures and standards that are sometimes treated blindly as checklists (Jarzabkowski et al., 2021; Köhler et al., 2019; Pratt et al., 2020a, 2020b). Methodologies that have in some way been reified as “templates” have begun to receive critique and pushback because scholars have noted that the methods people choose might have a constraining effect on the types of theorizing likely to emerge from them (Cornelissen, 2017). I agree that methods and forms of theorizing are in part co-constitutive (Gehman et al., 2018; Langley, 1999) and, therefore, that the imposition and reification of any particular methodology as the “one best way” by reviewers and editors can have an unintended narrowing effect on our scholarship. Thus, I strongly support others who have drawn attention to novel ways of capturing and analyzing qualitative data (Corley et al., 2021; Lê & Schmid, 2020) that generate insight and that step beyond formalized templates. And I am a firm believer in methodological bricolage that sustains trustworthiness while generating insight and novelty (Pratt et al., 2020b), as I am sure most of us are, Kathy Eisenhardt and Denny Gioia included.
So, what is wrong? From my point of view, nothing much. It is much better that we return to a thoughtful approach to qualitative research, where authors, reviewers, and editors judge articles and methods on their own terms and according to their merits, without imposing arbitrary stipulations that one must (or must not) use one or other particular approach and methodology for data presentation and display. I hope and believe that the field is becoming more mature about these things. So, this is all healthy… as long as we do not throw the baby out with the bathwater, that is, abandon insights and useful ideas about rigor and attention to onto-epistemological assumptions we learned along the way. Unfortunately, one of the problems with the reification phase of templating, manifest to some degree in articles like ours (Langley & Abdallah, 2011), but also in the ways in which “templated” methods may be taken up by others, is that this process may inadvertently tend to oversimplify and “box in” a living, dynamic, and vibrant framework for thinking and analyzing qualitative data in a way that its original authors might see as caricatures of their original intent, potentially setting them up for critique. I understand that: to a far lesser degree, I have been “templated” too (Lerman et al., 2020). So, it is good from time to time to set the record straight as Denny is doing here (Gioia, this issue; see also Eisenhardt, 2021). For me, that is part of the ongoing adjustment process.
At the same time, I still think there is a good deal of value in studying others’ methods, so that I and others can learn from them, as well as identifying useful ways of generating credible and insightful novelty from qualitative research. So, I intend to keep on looking for exemplars, but I probably will not call them “templates” anymore.
Templates, Approaches, and Exemplars
Kathleen Eisenhardt
In 1989, I wrote “Building Theories from Case Study Research” to help new researchers (like myself) to get started with a novel method (Eisenhardt, 1989a, 1989b). I combined insights about cases (Yin, 1984) and grounded theory-building (Glaser & Strauss, 1967) with my own ideas like the critical role of theoretical arguments. Broadly, I conceptualized multi-case theory building as an alternative to other forms of theory building like analytic models and armchair theorizing. This theory-building aim significantly shapes the core features of the so-called Eisenhardt method—that is, exploratory research questions, careful theoretical sampling to capture the focal phenomenon (while creating useful similarities and differences), strong theoretical development (like constructs, measures, and logical arguments that connect constructs), and constant cross-case comparison with replication logic (Eisenhardt, 2021).
The method became popular and my empirical papers and those of others (e.g., Jane Dutton, Connie Gersick) became exemplars as Ann Langley (this debate) writes. Over time, some researchers began to “templatize” my work. Now, although it might be flattering to have one's own method, it is also frustrating when that work is misrepresented, often substantially. These observers were perhaps understandably challenged to separate the method from what reviewers required (e.g., whether to include propositions) and from my own choices that were not inherent in the method (e.g., studying performance). Perhaps they were misled by its simplicity—for example, less “template” than they imagined! Regardless, these observers missed (often, stunningly) core features of the multi-case theory building method, a problem that besets grounded theorizing more broadly (Walsh et al., 2015). For some, there was a genuine misunderstanding. For others, the misrepresentations seemed sloppy and self-serving.
Ann Langley and Paula Jarzabkowski graciously offered Strategic Organization as a forum to “correct the record” (Eisenhardt, 2021). On one hand, the method has always been fundamentally about theory building; its few core features are implications of that aim. On the other hand, the method was never a rigid and detailed protocol. It was never about surface features such as types of data, number of cases, performance outcomes, variance theories, vertical knowledge building, and other misconceptions. Further, the method is evolving, not stuck in 1989. Although the essence of theory building from multiple cases remains, it is now more clearly ontologically and epistemologically flexible. There are many more articulated research question and design possibilities, much richer conceptions of time, and a broader understanding of the research enterprise including ties to simulation and formal modeling. I am particularly excited by links to machine learning—an intriguing pairing of complementary pattern recognition methods (Tidhar & Eisenhardt, 2020).
Of course, the debate at hand is whether templates add value. Denny and I (and others like Ann Langley and Kate Kellogg) have developed methodological approaches that clearly add value to the field. These approaches collectively reiterate the central messages of the importance of deep immersion in data and the tying of data to theory, even as they add unique insights for particular research agendas. Kate, for example, is the master of paired-comparison ethnographies. My approach has particular insights for researchers aiming to build theory using the replication logic of multiple cases.
These approaches collectively offer exemplars of successful studies—providing both guidance and legitimacy. Ann Langley, Davide Ravasi, and Claus Rerup (this debate) are right on—it can be effective to follow well-known exemplars at first in one's research, but then recombine their features (aka bricolage) and deviate. In my own reviewing and with PhD students, I observe that new researchers quickly improve by copying, but then develop their own blend—often varying from study to study (e.g., Cohen et al., 2019; McDonald & Gao, 2019; Ozcan & Hannah, 2020; Pache & Santos, 2013), as my own work does.
Finally, Denny, Ann, and I (and others) have created useful concepts such as data structures, temporal bracketing, and racing designs. These add to the rich portfolio of tools that researchers can deploy as relevant in their unique studies. That said, gaining the benefits of these and other well-known methodological approaches assumes that researchers understand them correctly and apply them appropriately—assumptions that are not always met.
Overall, the “template war” is more a “tempest in a teapot”—often stirred up to create strawmen. No one is arguing for rigid and complicated templates, particularly templates that are misrepresentations! As Jane Lê, Torsten Schmid, and Kevin Corley (this debate) suggest, let's move past self-serving and often inaccurate criticism. Let's collectively ensure that authors and others actually understand well-known methodological approaches and their contingencies. As Karen Locke, Martha Feldman, and Karen Golden-Biddle (this debate) note, let's better understand the creativity of theorizing. From the lens of multi-case theory building, it would be fabulous if researchers became more deliberate about boundary conditions, alternative explanations, and theoretical arguments (i.e., the “whys”). No more “spaghetti” diagrams without underlying logic! Overall, it is time for the adjustment phase; it is time to “grow up.”
Yes—There Is an Alternative to the Gioia Methodology! Reconnecting with the Hermeneutic Tradition
Jacqueline Mees-Buss, Rebecca Piekkari, and Catherine Welch
Let us start this rejoinder with our points of agreement with Denny Gioia in his “Coming Out Fighting” essay. We agree that organizational worlds are socially constructed, that the people who inhabit them are knowledgeable, and that researchers need to engage with the philosophy of science rather than become mindless method followers. We also agree that it is necessary to show evidence for the theoretical claims we make, regardless of our deeper ontological and epistemological differences, and that the primary (although not the only) aim of qualitative research is to present plausible explanations of social phenomena.
But we disagree on a fundamental point, namely the nature of social reality and how it can be interpreted. Interpretation is never a neutral act—either on the part of the researcher or on the part of those whom we study. This makes the interpretive process far more challenging and complex than what the “Gioia methodology” suggests. Engaging with these challenges is what ultimately determines the quality of the interpretation: the depth of questioning what is going on in the field, questioning our own interpretations as researchers and questioning our conclusions before accepting them. As International Business scholars, we are perhaps more sensitized to these challenges: the difficulties of interpretation are highly visible in cross-border fieldwork.
As Denny emphasizes, the Gioia methodology is based on the assumption that theoretical insights are grounded in the data, in the accounts of our informants. It proposes that “interpretive research” amounts to the faithful reporting of what informants choose to tell us (their “interpretation”), systematic procedures of data management, and theoretical abstraction. Through these procedures, a tight correspondence is maintained between what interviewees say and the abstract themes that the researcher then proposes. Anything else, Denny argues, not only jeopardizes the quality of the study but also privileges the researchers’ interpretation over that of their informants. But from our perspective, insisting on this tight correspondence ignores the unexpressed factors (e.g., power games, cultural practices, ideological differences, taken-for-grantedness) that are necessary to understand what is going on. It also privileges interviewees’ verbal accounts at the expense of other data sources, such as observational material (Van Maanen, personal communication). By suggesting that the researcher's interpretive task is not about strict adherence to interviewee accounts, we do not give primacy to the researcher over the participants. Rather, in doing so we acknowledge the partial, contingent, and thus fallible nature of any act of interpretation—including our own as researchers. Recognizing that accounts from the field do not offer us direct access to what is really going on is not arrogant, as Denny suggests, but rather a necessary reminder of the limits of our understanding.
The origins of Denny's position, including his insistence on “faithful reporting,” can be traced to a qualitative tradition that its pioneers labeled as “naturalism” (e.g., Lincoln & Guba, 1985). Our position, however, is informed by the hermeneutic tradition. The two orientations—despite both being called interpretive approaches to qualitative research—are fundamentally different, as the current debate demonstrates. In our Organizational Research Methods (ORM) paper, we argue that the hermeneutic approach is better able to deal with the interpreted nature of the social world. The hermeneutic tradition challenges the researcher to question the clues from the field, rather than take them at face value. Even if informants are candid and open, researchers can only begin to understand their responses by placing them into their broader social webs of meaning: deep-seated belief systems, knowledge, practices, institutions, and power structures. Making this a part of the interpretive task is a recognition of the rich layers of human experience that affect meaning-making and does not reduce either the respondents or the researchers to “cultural dopes,” as Denny puts it.
In our ORM paper, we juxtapose the core concepts of the Gioia methodology with those of Van Maanen’s (1979) approach to interpretation (see Mees-Buss et al., 2020, Table 1). Although both use the concepts of first- and second-order (with Van Maanen the source for Gioia et al., 2013), they attach very different meanings to these concepts. In the Gioia methodology, first-order (informant-based) and second-order (researcher-based) codings are clearly separable, as per the visualization of the data structure. For Van Maanen, this separation is not tenable, because the “facts” produced from fieldwork seldom speak for themselves. First- and second-order both involve the researcher's interpretations: first, deciding what is really going on here, and second, how do we explain it? These questions cannot be answered independently; “they interpenetrate one another” (Van Maanen, personal correspondence). Although Gioia and his coauthors see the interpretive task as a systematic process of data collection and analysis progressing along an inductive path of theorizing, Van Maanen considers the investigative task as a forensic process of discovery that follows an abductive path. By juxtaposing these contrasting usages of first-order and second-order concepts, we expose very different underlying assumptions glossed over by the shared vocabulary and untidy borrowing of terminology.
By purposefully selecting the research site and formulating interview questions, researchers are already influencing informants’ accounts. In this regard, data are never “raw” (Mishler, 1986) but rather “cooked,” because the researchers’ fingerprints are everywhere, from the very beginning (Gitelman, 2013). We are by no means the first to challenge the naturalists’ ideal of objective “interview data” (e.g., in management, Locke et al., 2008; Silverman, 1989; Van Maanen, 1979; in psychology and sociology Potter & Hepburn, 2007). Assuming that informants’ “authentic accounts” of their experiences offer a straightforward, reliable, and complete guide to understanding their full meaning, as does the Gioia methodology, has been widely (and we think rightly) critiqued as naive (Silverman, 1989, 2014).
Although entertaining, the link Denny Gioia makes between a hermeneutic approach and the Greek god Hermes, portrayed as a liar and trickster who delights in puzzling rather than enlightening his audience, is misleading. As we have already indicated, the primary contribution of the hermeneutic tradition (derived from the Greek hermeneuein, “to interpret”) is its emphasis on the subjectivity challenge: subjective researchers dealing with subjective data. How this is done is an ongoing debate to which we have contributed by highlighting the importance of heuristics. By this, we do not mean heuristics as “rules of thumb” or mental shortcuts, but heuristics as thought patterns and probing questions that guide and deepen the interpretive process. Needless to say, we do not subscribe to Denny's conclusion that quality in qualitative research is about distilling the “raw” data into “simplified theoretical models.” Instead, we believe that qualitative research is about opening the data up to different interpretations in search of a glimpse of the “lifeworld” to which they pertain. From a hermeneutic perspective, the theory produced from this process is provisional: socially accepted truth until better truths are found.
We believe a debate about the Gioia methodology is warranted. As our review of qualitative studies published in top management journals confirmed (Mees-Buss et al., 2020), it has become the dominant way of representing data analysis in management scholarship. This has reached the point that qualitative researchers are often advised to follow the Gioia methodology if they want to get published. We argue that this is limiting and thus detrimental to the vibrancy and development of our field. The use of the Gioia methodology as a template is concerning when it is faithfully followed without awareness or questioning of its underlying assumptions. We have surfaced these assumptions and have demonstrated that alternatives are available.
This debate is important in moving our discipline forward and enabling it to reconnect with the hermeneutic tradition. When first introduced, the Gioia methodology offered an interpretation of interpretive qualitative research that was palatable to the mainstream—that is, neopositivist reviewers at the time. This was progress. But in the end, this methodology does not offer its followers much more than an abstracted “theorized” version of what informants tell them. Knowing what we know today, post Popper, Kuhn, Garfinkel, Foucault, Wittgenstein, and so many others who changed the way we understand the social world, the only way forward is daring to challenge what we thought was the truth. Wittgenstein produced two major philosophical works in his life, the second (1953) dismantling the first (1922). “You can't think decently if you don't want to hurt yourself,” he later claimed (Wittgenstein, quoted in Malcolm, 1958, p. 40). Paradigm shifts occur by challenging our prior assumptions; deconstructing and reconstructing what was handed to us. Contradicting prior knowledge is not “absurd”—it is the (only!) way forward.
Who Is the Dope? A Response to Gioia
David Silverman
Gioia argues that we should “treat organization members as knowledgeable and, therefore … adequately represent their experience [and] show evidence in support of our assertions.” I am in full agreement with this. But it leaves a question hanging: How can we adequately represent members’ experience? Although Gioia calls his approach to qualitative research a “systematic methodological approach,” I think we can reasonably ask: Where is his methodology? Nowhere in his essay does he tell us what resources and methods we should use to discover “experience.” In this respect, he is like users of social media and talk-show hosts who act as if they have direct access to “experience.” Since Gioia doesn't help us here, let us look at the predominant qualitative methodologies in use in the light of the following questions:
Q: How do most qualitative researchers access experience?
A: Through open-ended interviews carried out by “empathetic” interviewers.

Q: How are these experiences usually reported?
A: By the researcher depicting “themes” in what interviewees say [Thematic Analysis] and/or by claiming to have unmediated access to the interviewee's “experience” [Interpretative Phenomenological Analysis].

Q: What does this leave out?
A: How interviewees assemble versions of their experiences while skillfully managing the implications of what they say for the moral standing of both themselves and of the interviewer. As Holstein and Gubrium put it: “ … are deeply and unavoidably implicated in creating the meanings that ostensibly reside within individual experience. Meaning is not merely directly elicited by skillful questioning, nor is it simply transported through truthful replies; it is strategically assembled in the interview process” (Holstein & Gubrium, 2016, p. 69).
This means that what Gioia glosses as a “systematic methodology” would need to address what Holstein and Gubrium refer to as “the circumstances of narrative production.”
Moreover, to proceed on the basis that interviewees have “states of mind” able to be revealed by the skillful interviewer derives from the “psy” disciplines and also survey research. Both depend upon the everyday assumption that each of our actions is linked to a mental process (e.g., a “perception,” an “intention,” or a “motive”). As Potter and Hepburn point out, in interviews: “People (are) asked about what they do and what they think, and they helpfully tell you about these things. However, looked at another way, what is going on here is that people are being treated as being [in] a special epistemic position with respect to their own conduct. And not just with respect to actions and events, but causal and developmental relationships, intra-psychic processes and so on. The interview is dependent on a range of ambitious cognitive judgments and feats of memory and analysis” (2012, p. 567).
Contrary to the practice of most qualitative interviewers, this means that “we … need to be cautious when treating (interview) talk as a way of referring to inner psychological objects of some kind” (Potter & Hepburn, 2012, p. 567). Am I implying that we drop any interest in studying experience? On the contrary, I suggest that by all means study “experience,” but not as some unmediated mental category. Instead, look at how members warrant an experience, perhaps by appealing to particular identities (mother, expert, etc.) or to an appropriate search procedure that allows them to discount another version of what happened (see Sacks, 1984). A useful exercise is to study occasions where interviewees downplay their right to “own” an experience—for example, when explaining why they do not have much to say about a given topic. An extreme case of downplaying such ownership is seen in family therapy sessions where clients are often asked about the experience of a partner or close relative (see Peräkylä & Silverman, 1991). My argument provides an answer to Gioia's justified demand that we should “treat organization members as knowledgeable.” Contrary to the practice of many qualitative researchers, it means paying attention to the skills of participants in fashioning their accounts and avoiding tropes such as the “empathetic interviewer” and “skillful thematic analysis.”
Finally, what does this make of Gioia's characterization of my position? He writes: “Silverman (2017, 2020) has … been concerned about the pitfalls of taking informant accounts as accurate explanations. He would have us grant precedence to a researcher's account as somehow more omniscient and credible than those of knowledgeable informants, arguing that researchers have a different way of seeing that informants themselves cannot see.”
Gioia's use of the term “informant” is suggestive. Like so much interview research, it reduces the interviewee to a repository of information, motives, and experiences all of which can be discovered by the skillful, trained interviewer. It also ignores qualitative research which discovers participants’ practices in the analysis of naturalistic data (documents, digital data, visual data, etc.).
Sadly, Gioia has stood my argument on its head. He claims that I am downplaying informants’ accounts. As opposed to this, I have shown how most interview research makes “informants” into dopes (Silverman, 2017). By contrast, I argue that naive analysts of interview data are the real dopes because they fail to address how our interviewees, without any training, build, second-by-second, beautifully designed accounts to manage the moral implications of the talk of themselves and the interviewer. Once we attend to the narrative skills of our interviewees, we can start to construct the “systematic methodological approach” absent from Gioia's argument.
Hey—We’re All Friends Here!
Jane Lê and Torsten Schmid
Denny Gioia offers a tough takedown of the Organizational Research Methods Special Research Forum on Qualitative Templates. He provides a firm critique of many of the papers featured in the special issue, including our own. In our response to his piece, we build on and challenge his main proposition—that a focus on innovation in methods might distract scholars from theory development—in three important ways. First, we assert that a reflexive use of research methods is important for quality research. Second, we show that, when properly applied, innovation of research methods can produce insightful theoretical findings. Third, we posit that although qualitative methods have significantly increased in rigor in strategic management research, there has been less emphasis on methodological innovation to support field development. In this response, we aim to develop constructive debate; we believe such debate—just like innovation—can be generative for the field of organizational research methods.
First, and firmly, we reject the strawman argument that Denny builds in his paper. In working to foster debate, he constructs opposition and disagreement in areas where we actually agree more than we disagree. For instance, like Gioia himself, we strongly support the reflexive or “non-templated” application of research methods. Indeed, we argue that much research practice in strategy and management is based on flexible “designs-in-use” (Lê & Schmid, 2019). In that paper, we reviewed twelve common designs-in-use applied in the field, preempting others’ calls for additional exemplars (Langley, this issue) or templates (Ravasi, this issue). As such, we already know that scholars combine technical mastery of methods with their creative crafting to develop research designs that are both rigorous and generative. Hence, much like Denny, we are critical of scholars who treat the “Gioia methodology”—or any other design-in-use for that matter—“as a kind of cookbook to dress up their research reporting to make it look more rigorous than it actually was” (Gioia, this issue). We agree that, used appropriately, the Gioia methodology will continue to make important contributions, generating both interesting theoretical findings and legitimacy for qualitative, interpretive work.
Secondly, and here there might be some disagreement, we believe that it is not just content that drives innovation but also the tools and methods used by the innovators. In fact, new tools and methods often lead to new discoveries. Consider, for instance, the Renaissance artists Brunelleschi and Alberti, who introduced proportion and linear perspective into their artworks to create more realistic art and stronger structures. This innovation led to new standards of excellence and—equally importantly—also drew new topics that were central to the Enlightenment into the social conversation (MacIntyre, 2007). In strategy and management, we see similar patterns: Innovation might fail, not because of the content of the innovation, but because of the tools that are put into place to implement the innovation (e.g., Christensen et al., 2008). We argue that the same is true in organizational research methods. Certainly, the weight of the evidence (Lê & Schmid, 2019) contradicts Gioia's claim that scholars might be “searching in the wrong place” (Gioia, this issue) if they aim for innovation of methods. To ensure that innovation remains generative, we posit four key principles of innovating methods: (1) holistic innovation, (2) clarity of methods, (3) codeveloping theory and method, and (4) reflexivity (see Lê & Schmid, 2020).
The principle of holistic innovation acknowledges the entwined nature of research design, whereby changes in any part of the research process are likely to have implications for all parts of the research process. The onus is thus on the researchers to ensure that research designs always remain aligned and internally consistent. The principle of methods clarity reminds scholars of their responsibility to make their methods transparent to others. In particular, when innovating research methods, it is crucial that readers, reviewers, and editors understand in some detail what was done within a particular research study. Without such clarity, depth and detail, it is challenging, if not impossible, to assess the quality and value of a research study. The principle of co-developing theory and method makes salient to researchers that methods are the means to an end and that the aim must always be to generate novel theoretical insights. As such, theory and methods must forever be in interplay at all stages of the research process. The principle of reflexivity reminds us that—even in innovation—we need to preserve some of the elements of the original method. In particular, we must understand the original motivation and use of a method and its underlying assumptions if we want to employ and adapt the method in a skilled way. Thus, like Gioia in his essay, we ask “researchers to show consideration for appropriate ontological and epistemological assumptions” in their research methods designs. More broadly speaking, we also encourage our peers to actively reflect on and be reflexive about their own research design—sharing ideas, engaging in conversation, sparring with others, etc. Indeed, this paper and this debate might be considered an example of such an exchange! We believe that these principles collectively safeguard innovation in research methods from becoming an end unto itself, where one values method innovation above all else. 
Rather, they encourage purposeful innovation, that is, innovation with the specific purpose of generating novel theoretical insight.
Thirdly, although qualitative methods have increased in rigor within strategy and management research, we have not seen an equal emphasis on their creative use to support field progress. One of Gioia's greatest contributions is that he—adapting the skills and methods of a trained engineer to a different context—reinvented grounded theory in a way that allows the qualitatively rigorous application of his methodology in strategy and management (Gioia et al., 2013). His efforts—alongside compatriots such as Kathleen Eisenhardt, Ann Langley, and Stephen Barley—have enabled nonqualitative colleagues to understand and appreciate not only the method but also qualitative research more broadly. This generated much-needed legitimacy for a fledgling field. Many scholars, including us, have applied the “Gioia coding tree” because it organizes and presents our messy qualitative analyses in a structured way that enables us to communicate it effectively to others. Yet, this rise in legitimacy and popularity comes at a cost. For one, there are some researchers who apply the methodology inappropriately, violating its core principles and traditions in the process. Secondly, and this is our gravest concern, there is a risk of standardization—whereby “the Gioia methodology” becomes seen by our positivist colleagues or less-experienced interpretivists as the only legitimate way to do qualitative research, eclipsing all other methods in the process. We find this untenable because we believe that it would be gravely detrimental to the field. We thus feel a responsibility to remind our fellow scholars that rigor and creative use of methods are important for vibrant scientific inquiry. The Gioia methodology is popular precisely because it was so wonderfully revolutionary. The open question remains who is going to be the next revolutionary! 
We hope that they are reading this very essay, finding new motivation and enthusiasm to continue innovating research methods and doing really interesting research.
To Template or Not to Template? That Is NOT the Question
Karen Locke, Martha S. Feldman, and Karen Golden-Biddle
Asked to add our voices to a “debate” on templates in qualitative research, that is, conducting research by “copying” existing forms or patterns, we wondered, “A debate! Really?” Qualitative research practice in our field is characterized by multiplicity: in research traditions, in methodologies, and in analytic procedures (Denzin & Lincoln, 2018; Easterby-Smith et al., 2008; Gephart, 2004; Patton, 2015; Pratt et al., 2022). Broad agreements exist within this multiplicity; for instance, that the perspectives and understandings of organizational actors are necessarily to be accounted for in our theorizing; that there should be a good fit between presented observations and how they are theorized (Golden-Biddle & Locke, 2007), and so on. With respect to templates, the qualitative community is generally in agreement that templates are neither inherently good nor bad; it is how they are used in research practice that is consequential. We have seen instances in which reviewers and editors, who might not be qualitative researchers, acted as if following a template was necessary for producing rigorous research. The scholars whose work has been most frequently templated, however, have argued that “treating it [the methodology] as a ‘formula’” is a “trend” that is “something of a concern” (Gioia et al., 2013, pp. 25–26) and that efforts to encourage rigor through templates can lead to rigor mortis (Eisenhardt et al., 2016).
So, with much settled in qualitative research, including that following templates is neither inherently good nor bad, what is a more meaningful concern? It's how creative theorizing occurs through analysis (Locke, 2011; Locke et al., 2008) where, for instance, experiences of creative leaping (Klag & Langley, 2013) or arriving at Shazzam moments (Gioia et al., 2013) are recognized as important and underarticulated. As Silverman succinctly argued, “analysis is always the name of the game” (2007, p. 61). In a series of papers, we have documented and theorized how the analytical process fosters creativity through close connection with data by prompting ongoing, highly contextualized adjustments and rethinkings of what we are finding and what we should do next. We know that as we execute analytic procedures, our studies often take unexpected turns and meanings shift—at least in the studies that end up being surprising and insightful, even shazzam-y (Fine & Deegan, 1996; Locke et al., 2008). “How does this happen?” is the question we need to ask rather than whether to follow a template.
In pursuit of this question, we have theorized the lived experience of doubt as helpful for researchers to generate creative insights (Locke et al., 2008). Of course, doubt can be paralyzing, but it can also be freeing. In concrete terms, making doubt generative entails developing sensitivity to being pulled up short by a round of analytic work and responding to these moments by engaging the new possibilities this lived experience offers. In this process strategic principles (e.g., embracing not knowing, disrupting the order, nurturing hunches), all of which are both permissive and guiding, are more productive than definitory rules because they can be applied flexibly to enliven our interest in and engagement with our research.
Recognizing that analysis can take many forms (e.g., Feldman, 1995; Langley, 1999), we explicate how one commonly used analytic technique, coding, contributes to creative theorizing. The process of live coding as we describe and illustrate it (Locke et al., 2015) underscores that coding is theoretically productive when it generates dynamic shifts in our orientation to the data that allow us to consider and interact with further observations and ideas. It is a lively process in which data, codes, and researcher(s) work on one another to produce new insights that are thoroughly grounded in the data (including the emic perspectives in the field site). We illustrate this through several studies. For instance, we show how coding achieves a series of shifts in Gersick's famous study of midpoints, starting from what makes groups effective and ending with how midpoints organize a group's activities. Rather than a follow-on to finished codes, productive theorizing occurs through coding's dynamism.
Most recently, we have delved into the relationship between iterating analytical procedures and creativity in theorizing (Locke et al., 2020). Iterating is “the repeated application of analytic actions oriented toward theoretical progression. In practice, iteration often takes place through the active work of pursuing the questions and noticings that arise in and from this analytic work with yet more analytic actions” (p. 263). Iterating is well recognized as important to producing insightful and compelling theorizing. By identifying significant heterogeneity in the arrangement and sequencing of coding procedures, we show how analytic iterating both creates and follows the logic of the project, whether deductive or inductive. Explicating iteration's internal logic both extends our argument about analysis as a lived experience that plays out within a research project and lays the groundwork for future work on the unfolding experience of theoretically productive analyses.
A question lurking behind the notion of the “debate” is whence rigor? Our work proposes that rigor is an outcome of the internally consistent layering of analytical work that produces adjustments and rethinkings central to grounded insightful theorizing. Working through a project thus entails multiple runs, even restarts, of analytical work to shift our research project to new insights. From this view, rigor is not determined by following models (e.g., paradigms, template methodologies, exemplars) external to a given study. Individual research projects may draw on them as inputs to what might be a next useful analytic step. But, to creatively progress the research, a researcher must understand why and how a next step makes sense in the internal logic of the project.
Lighting a Fire: Understanding and Doing Qualitative Research
Claus Rerup
I am guilty! The “Gioia methodology” has been a generative source of inspiration in some of my work (Rerup & Feldman, 2011; Strike & Rerup, 2016). In these papers, the use of this methodology involved a great deal of creativity. We did not turn the methodology into an iron cage consisting of rigid procedures. Instead, the systematic grounded theory procedures provided a foundation for moving our research forward and communicating with editors and reviewers. In each paper, my coauthor and I combined the Gioia methodology with procedures and guidelines developed by other qualitative scholars. The methodology was not a magic bullet that automatically fixed thorny issues and generated easy publications. Each of those papers went through several major revisions—one went through six major revisions with new reviewers being added. There was no quick fix, template, or cookbook—only persistent and mindful trial and error.
Just as the literature on theory building can leave a reader confused about how to write a paper that contains a strong theory (Sutton & Staw, 1995), the literature on qualitative methods can leave a reader confused about how to write a paper based on strong methods. Today there seems to be little agreement about whether the use of the Gioia methodology reduces or enhances theorizing (witness the articles in the ORM special issue or the essays in this debate). Clearly, the methodology has been helpful in crafting and publishing some of my work. Yet, other approaches have also been helpful, perhaps because I don't consider the Gioia methodology or for that matter any qualitative methodology a “template.”
The template discussion in the field of qualitative research mirrors a prior discussion in the strategy literature about the replication of knowledge (Rerup, 2004; Winter & Szulanski, 2001). A key insight from that discussion is that knowledge (even in the form of templates) is hard to replicate. Knowledge is eventful, situated, imperfect, fraught with failure, etc. The replication process can be seen as an exchange between two agents: A, the source, and B, the receiver or user of knowledge. It assumes that A can teach B how to use the knowledge (directly or vicariously) and that B is motivated to use it. Yet, these assumptions often fall apart in the journal review process where agents beyond A and B are involved (e.g., editors and reviewers).
Anyone who has done qualitative research is aware that insight does not emerge when methods are replicated mindlessly. I learned that lesson as a first-year doctoral student in Denmark. I had submitted a paper for review in a North American management journal. I wrote the paper carefully (not mindlessly), following accepted grounded theory principles drawn from the methods sections of several qualitative papers published in that and other journals. I was not copying a particular method, but drawing on what appeared to be “best practices.” In response, the editorial decision letter said something like: “This paper is either looking for a fire or a miracle,” which was pretty upsetting. Initially, I resented the time I had invested in the paper. Later, the rejection became a spark that lit a fire within me. Rather than give up, I wanted to learn how to do qualitative research. I attended workshops, read the literature, and practiced to get better. Denny Gioia also shared his knowledge with me.
As a masters student transitioning into the PhD program, I attended a PhD course delivered many years ago by Denny at Copenhagen Business School about cognition in organizations. It was not a methods course, but we also talked about what has come to be labeled as the Gioia methodology. Realizing how hard it was to do qualitative research, I engaged in many conversations with Denny over the years about doing qualitative research. During these conversations, he never spoke of “his way” of doing qualitative research as a cookbook or some sort of template. Instead, he was dismayed about how his work was being copied, replicated, misapplied, and oversimplified. Over many iterations, our conversations covered grounded theory, coding, and representing data-to-theory connections in novel, interesting, and informative ways. My memories of those conversations are well summarized in this statement: “I really don't care how researchers show their evidence, so long as they show it, rather than just telling the reader about it” (Gioia, this issue). That was the spirit. Denny made it clear that I did not have to do it his way. His focus was on clarifying a standard for providing evidence of what the people in the context under study were thinking, feeling, saying, and doing.
Working with Denny (and later Kevin Corley) for more than a decade on a paper that has gone through multiple rejections and eight revisions in several journals has exemplified the standards and approach Denny brings to qualitative research. In crafting this paper, we have used a wide variety of qualitative methods and have gone beyond the Gioia methodology. In fact, we invented our own methods that don't look much like those he has used before to advance our understanding of intangible processes involved in organizational identity transitions and to investigate an underexplored realm of sensemaking. It has been quite a ride, but also a wonderful learning experience for all of us.
There are several lessons available from my experience. First, mindless replication of any methodological template seldom produces good work. The Gioia methodology was not developed to be used as a template. The methodology has been helpful in developing some of my own work perhaps because we did not use it as a template, but instead as a systematic set of grounded theory procedures open to mindful application. Second, my qualitative research skills have improved by learning from people such as Gioia, Eisenhardt, Barley, Bechky, Langley, Pratt, Golden-Biddle, Locke, Pentland, Feldman, etc., who all developed particular methods to analyze their data. My simple insight from that process is that anyone drawing on other scholars’ recommendations would be wise to ask whether those scholars faced all the contingencies of one's own particular context when they used a particular method. Generative doubt is essential in all processes of knowledge generation and transfer and can lead to further exploration and updating of the principles behind a particular methodological approach. Third, there are elements of art, craft, and eureka moments involved in qualitative research. When we remove those elements, most methods collapse and the result is uninteresting research.
Common Misconceptions About Templates: Why We Need More, Not Fewer
Davide Ravasi
I will take an unpopular position in this commentary. (Come on, it would not be fun otherwise!). I argue that we need more templates, not fewer. This is not because I disagree with the common criticisms of templates. I simply come to different conclusions about what to do about it. I agree that templates can induce researchers to replace reasoning with “proceduralism” (Harley & Cornelissen, 2020), mislead novice researchers about the complexity of qualitative research (Pratt et al., 2022) and, if applied (or enforced) mindlessly, they might stifle methodological pluralism and innovation (Cornelissen, 2017; Pratt et al., 2022). And that is precisely why I believe we need to encourage a more constructive debate around common and emerging templates.
Rather than demonizing templates (ok, maybe demonizing is too strong, but you get the idea) and discouraging young scholars from using them, I would rather see us recognizing the benefits of using templates and shifting the discussion from “Are templates good or bad?” to “How can templates be used more effectively—productively, generatively—in qualitative research?” Rather than debating whether the Eisenhardt Method or the Gioia Methodology is good or bad, I would rather see us striving to sharpen our understanding of when, how, under what conditions, for what type of data, etc. these approaches might be more appropriate.
I have used Denny Gioia's methodology for at least seven papers. I have probably used it in a way that Denny might consider improper in another two, and in two more I have combined it with other analytical steps to adapt to the complexity of the study. I don't think I have ever applied this methodology exactly the same way twice (actually, I don't think Denny has applied his own methodology exactly the same way twice either!). Yet, I always had this template in mind—its fundamental assumptions and its approach to data analysis—when I analyzed data. When I read about ways of doing research that are presented as alternatives to templates (e.g., Harley & Cornelissen, 2020; Pratt et al., 2022), I am puzzled: not only do I agree, but it seems to me that they capture well how I approach qualitative research. Yet, at the same time, I also believe I am applying a template.
Part of the problem with templates, I suspect, lies in interpreting and describing them as “standardized protocols”—as inflexible rules to be followed strictly. They are not. Denny Gioia has been very clear about his methodology not being a “cookbook” (Gioia et al., 2013), and Eisenhardt (2021) recently discussed a whole array of variations to her original guidelines. Either these two methodologies—contrary to what we have assumed so far—are not templates, or templates are actually more flexible than they are commonly portrayed to be. Being “systematic”—a word Denny frequently uses when describing his methodology (Gioia et al., 2013; Gioia, 2019)—does not mean being “standardized”; it means being disciplined, careful, and methodical. It does not mean mindlessly following rigid prescriptions.
I also suspect that many of the problems people lament—scholars using templates incorrectly, reviewers and editors imposing templates inappropriately (Köhler et al., 2022)—also depend on partial or incomplete understandings of these templates, rather than on the templates themselves. Templates can be roughly seen as a combination of fundamental assumptions (ontological and epistemological), methodological guidelines, and visible artifacts (such as Gioia's data structure, data tables, etc.). Conceiving of templates as blueprints for data reporting, rather than guidelines for data analysis, however, can lead those who are less familiar with these methodologies to focus on their most visible features and overlook the guidance that they offer about how to produce an outcome. Here are, for instance, a few common misconceptions I have noticed over the years:
Tables are just for data display: Absolutely not! Tables are not simple repositories of evidence (Cloutier & Ravasi, 2021). They are not something you do at the end to please reviewers, by searching for quotes here and there to claim support for a model developed intuitively. The tables I submit are streamlined versions of the much richer tables I have worked with up to that point, which contain all the relevant evidence (and can be very long…). It is by making tables—by cutting and pasting, moving things around—that I really immerse myself in the data and make sure that, however I make sense of them, they are really grounded in the available evidence. Denny often repeats “You got no data structure, you got nothing.” For me, it is “If we do not have a table, we do not have a paper.”
All you need is a data structure: A data structure is important, but it is a mistake to assume that a data structure capturing informants’ accounts will be sufficient to support your analysis. Sometimes, these accounts become meaningful only in light of other analytical steps. This is where the Gioia methodology becomes the backbone of a more elaborate analytical journey that includes the careful tracking of actions, events, discourse, etc. Other tables and figures might be needed to capture the outcome of these additional steps, and/or add a clear temporal dimension to the analysis.
Building a model is an intuitive act: Yes, I know, there will always be a “conceptual leap” (Klag & Langley, 2013) when you eventually arrange your codes into a grounded model. But Denny Gioia's (2004) description of this step as a “Shazam!” might have given the impression that there is some sort of magic involved. If we accept the idea that a theory is composed of a set of interrelated concepts and an explanation for these interrelations, then this template can help you gradually build a grounded model. I usually iterate among writing definitions of second-order codes (i.e., concepts) directly into my working tables (to gradually articulate the building blocks of the model), sketching tentative visual representations of linkages among these concepts, and systematically writing “chunks” of theory explicating these linkages directly in the working tables or next to them. Then, I eventually merge these explanations into a first draft of the theorization of the emerging model. It's not magic; it's a method.
If templates “codify best practices and conventions for a particular qualitative method” (Harley & Cornelissen, 2020), then the real problem behind their inappropriate use might actually be that we are still at an early stage of codification of the tacit knowledge developed by early users (even for the most common templates). If so, rather than trying to do away with templates altogether, we might instead want to support this codification process by “talking more and teaching better”—encouraging richer and broader conversations around best practices and creating more opportunities for knowledge sharing—to help more people reap their benefits, while minimizing their shortcomings.
Scholarly Debates Are All Well and Good, But to What End? Moving Beyond Debate Toward Generative Action
Kevin Corley
I love a good debate and believe that an open sharing and discussion of seemingly incompatible ideas is what drives innovativeness and social renewal. As Jesse Jackson once noted, “Deliberation and debate is the way you stir the soul of our democracy.” But one of the things that can happen in scholarly debates is the participants end up talking past each other—believing their work has been misunderstood when neither party has really understood the perspective offered by the other. It appears that is happening in this case as well, as participants on both sides of this issue claim to be misunderstood when they are guilty of the same offense. In that sense, all good things must come to an end, even a good scholarly debate, and I think we have reached that point with qualitative methodology templates.
Between this JMI curated debate and the Organizational Research Methods special issue (2022, Vol. 25, Issue 2), we have now expended a great deal of time and ink on what appears more and more to be an arcane scholarly issue. What do I mean by this? If I asked any of the authors from the ORM and JMI issues whether they agreed with the statement, “There is no one best way to gather or analyze qualitative data for inductive/abductive purposes,” I suspect that every one of them would agree. Further, if I gave Denny truth serum and asked if there are ways to appropriately capture informants’ experiences of a phenomenon outside using the Gioia methodology, he would agree (I know, what a terrible waste of truth serum on someone like Denny). If I gave truth serum to Silverman, Mees-Buss et al., or even Pratt et al., or Köhler et al., and asked them whether using a systematic approach to collecting and/or analyzing qualitative data (i.e., a template) could be appropriate under some circumstances, they would all agree. My point is that it is all well and good to have scholarly discussions like this, and I hope ours spurs more of our colleagues to consider how they should mindfully approach a qualitative study, given their specific interests. Ultimately, however, we are all dedicated to the same noble pursuit: How do we best build knowledge and understanding about the phenomena that affect organizations and that their members experience?
So, while I applaud all these authors for their passion and interest in arguing for how best to conduct ourselves methodologically, it would be misguided of us to have this debate ultimately undermine the legitimacy of qualitative research or the insights produced through qualitative research (via templates or otherwise). This means we must find a way forward and move beyond debating each other, into a generative community supportive of the multitude of ways (both systematic and idiosyncratic) that we can build new theoretical insights using qualitative data. Thus, I am humbly suggesting an addition to Langley's brilliant phases of template development (this issue) by recommending that it is time to move beyond the “Pushback and Adjustment” phase and into a phase I’ll call “Appreciation and Propagation.” That is, we need to redirect the time and energy that is going into debates like this one toward increasing the field's appreciation of the many ways good qualitative research can be done, as well as propagating the quantity of high-quality approaches to qualitative research that we see published in our journals. There are two obvious activities I see as necessary for us to transition into this (I hope last) phase.
First, experienced qualitative scholars need to (re)dedicate themselves to providing more educational opportunities for inexperienced researchers (whether they be PhD students, junior scholars, or senior scholars) interested in pursuing a phenomenon via inductive/abductive methods using qualitative data. As a PhD student from an era when Denny was still honing his approach, I found it invaluable to have a foundational set of guidelines from which I could learn how to conduct and present inductive analyses. As a starting point for learning how to do inductive and abductive research, what we now call templates are very useful. But I also appreciated the truism that each qualitative project is unique and, therefore, each endeavor to collect and analyze qualitative data implies a unique set of methodological details. I learned very quickly that there are nearly as many effective ways to do qualitative research as there are published qualitative studies. For that reason, we need to ensure that qualitative researchers receive the training needed to help propagate effective qualitative methods throughout the management field.
On the back end of our scholarly process, we also need to provide more education for journal editors and reviewers on why a multitude of qualitative approaches is not only necessary but generative for the development of the field (see also Corley et al., 2021). Editors’ short-sightedness in rejecting a qualitative paper either because it doesn't follow a particular approach (which unfortunately happens) or because it does follow a specific approach (yes, unfortunately this happens too) is ultimately harmful not only to our qualitative community but also to the field more broadly. It confuses scholars about what good qualitative research looks like, potentially delegitimizing some insights in the eyes of some readers, while also dissuading other scholars from conducting and trying to publish qualitative research. Our goal should be to get as much good qualitative research published as possible, which will happen only once editors and reviewers appreciate the many ways qualitative research can be done well.
Ultimately, then, the issues driving this debate are great fodder for improving education about, appreciation for, and propagation of qualitative research… but only if used for generative purposes. I urge you to reread the essays in this debate, as well as those in the ORM special issue, with a generative framing around how systematic and idiosyncratic approaches can produce high-quality empirical insights. Afterward, ask yourself, “What can I do to help further qualitative research and build the field's capacity for more inductive and abductive research?” I hope the answers we produce to this question will lead not to yet more arcane debate about the potential role of templates, but instead to a collective appreciation for how qualitative methods of all types can facilitate the achievement of our shared goal: developing insights that will help organizations and their members to be as healthy, humane, and productive as possible.
Footnotes
Declaration of Conflicting Interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author(s) received no financial support for the research, authorship, and/or publication of this article.
