Abstract
Much qualitative research produces little new knowledge. We argue that this is largely due to deficits of analysis. Researchers too seldom venture beyond cataloguing data into pre-existing concepts and scouting for “themes,” and fail to exploit the distinctive powers of insight of qualitative methodology. The paper introduces a “value-adding” approach to qualitative analysis that aims to extend and enrich researchers’ analytic interpretive practices and enhance the worth of the knowledge generated. We outline key features of this form of analysis, including how it is constituted by principles of interpretation, contextualization, criticality, and the “creative presence” of the researcher. Using concrete examples from our own research, we describe some analytic “devices” that can free up and stretch a researcher’s analytic capacities, including putting reflexivity to work, treating everything as data, reading data for what is invisible, anomalous and “gestalt,” engaging in “generative” coding, deploying heuristics for theorizing, and recognizing writing as a key analytic activity. We argue that at its core, value-adding analysis is a scientific craft rather than a scientific formula, a creative assemblage of reality rather than a procedural determination of it. The researcher is the primary generative and synthesizing mechanism for transforming empirically observed data into the key products of qualitative research—concepts, accounts and explanations. The ultimate value of value-adding analysis resides in its ability to generate new knowledge, including not just the “discovery” of things heretofore unknown but also the re-conceptualization of what is already known, and, importantly, the reframing and reconstitution of the research problem.
Introduction
We introduce an approach to qualitative analysis that aims to extend the reach and depth of researchers’ analytic interpretive practices and enhance the worth of the knowledge generated. Our “value-adding” approach to qualitative analysis was distilled from many years of practicing and teaching qualitative research methodology in a very distinctive research environment—the health sciences academy—where the philosophy, logic and practices of research tend to be epistemologically and methodologically very different from those underlying qualitative forms of inquiry. Health-related research is largely grounded in the “scientific method,” an approach to the production of knowledge and the inquiry process that resides primarily in rigorous adherence to prescribed rules and procedures for collecting and analyzing measurable (quantifiable) empirical data. In clinical/therapeutic research arenas, the randomized controlled trial is widely considered the most valid and scientifically superior expression of this method, and its logic dominates the weltanschauung, or worldview, of the field as a whole and positions qualitative inquiry as running against the grain of prevailing scientific orthodoxy (Mykhalovskiy & Weir, 2004).
Despite this, qualitative research has increasingly been accepted in the health research domain as a viable alternative to mainstream research (albeit mainly for “exploratory” or “descriptive” study of certain topics, like patient experience), and the demand for training in qualitative research has risen substantially in recent years (Eakin, 2016). In our view, however, much of the qualitative health research being done is underperforming and is generating little new knowledge. Data are often sub-optimally penetrated, with researchers venturing little beyond surface meanings and pre-existing conceptual frameworks, and failing to question extant understandings of the objects of inquiry. This inadequacy, we propose, stems from the investigators’ inability to maximize the potential of qualitative inquiry, most particularly its capacity for critical and creative analysis and interpretation.
Especially but far from exclusively in the health sciences, qualitative research curricula and texts tend to emphasize the early stages of research, particularly study design and data collection methods, reinforced by research funding models that provide most resources for the front end of research, often leaving underfunded the “heavy lifting” required of qualitative analysis and interpretation in the later stages of study. In all sites of qualitative research—educational programs, grant applications, publication—the theoretical underpinnings and practical methods of data analysis are much less clearly articulated than are other parts of the research process. For example, we note the scarcity of advanced level qualitative methodology courses in tertiary educational institutions (Eakin, 2016; Gastaldo et al., 2018). Prevailing models of qualitative analysis often start (and end) with various forms of coding and inventorizing of “themes,” a type of “bare-bones engagement” insufficient for analysis and interpretation of research findings (Mykhalovskiy et al., 2018, p. 614). More conceptually adventuresome, theory-informed analysis is variously unexplored, or dismissed as being inappropriate for applied research agendas (Thorne, 2016, 2020). In other instances it may be called for, but then left undeveloped within a mysterious methodological black box in which it remains unclear how the qualitative analysis was actually accomplished.
Weakness in the analysis component of the research process is, for us, the Achilles heel of qualitative research generally, undermining the potential of this form of science to produce more powerful knowledge. Our intent in this paper is not so much to critique existing approaches to analyzing qualitative data (although that is embedded in the strategies we put forward) as it is to advance an approach to analyzing qualitative data that is capable of moving beyond commonplace surface realities, expanding and liberating the analytic gaze, and unleashing interpretive imagination. We speak both to novice researchers learning how to engage with qualitative data and to experienced researchers who may welcome supplements to their existing analytic platforms, and perhaps even an opportunity to become aware of and “name” some of their own unexamined practices of interpretation and sources of insight. Although our approach has been shaped by our location and history in the health sciences context, we believe it has general relevance to qualitative researchers across a wide range of applied and disciplinary settings. Readers are invited, as they read through the paper, to reflect on how the various elements of this approach may (or not) resonate with their experiences in different research practice environments.
We start by elaborating briefly on the nature of value-adding analysis and its core conceptual properties. We then describe and illustrate some practical hands-on strategies or “devices” for thinking differently about data and for creating analytic pathways into data. Finally, we suggest why value-adding analysis is worthwhile and what it can contribute to the research endeavor.
Value-Adding Analysis
The notion of “value-adding” is borrowed from economics where it refers to the increased worth or value of a product created during stages of the production process. Applied to qualitative analysis, it refers to the increased value of the knowledge produced by a process of analysis that specifically reaches beyond “face-value” (self-evident) meanings of data and beyond prevailing commonsense conceptualizations and explanations for them. To add value to analysis, researchers need to be able to “penetrate” data and bring into view new possibilities for meaning and interpretation. The point is not so much that using stock pre-conceived notions is wrong, but more that it can prevent the analyst from seeing something as something else, from getting a quite different analytic sightline.
The conventional end point of much qualitative analysis goes little beyond the identification of “themes,” which tend to be understood as quasi-objective phenomena that reside in the data awaiting interpretation-free discovery. Value-adding analysis requires something different from the gathering together and reporting of empirically observable material: it seeks to construct out of grounded empirical data general concepts that characterize findings at a more abstract level. That is, value-adding analysts aim to “theorize” data, to relate concepts to each other, and to understand particular ground-level realities in more abstract terms.
In qualitative analysis, to theorize data is to enable findings to be generalized. Of course, we are referring here not to statistical generalizability, but to analytic or theoretical generalizability. If amply justified and grounded in empirical, real-world events, the concepts and theory generated to characterize a particular localized phenomenon can be used to gain insight into other substantively different but theoretically comparable phenomena (see for example, Alasuutari, 1996; Becker, 1998). That is, value-adding analysis aims to identify key abstract properties of the phenomena of interest, which can be used by readers to gauge whether findings might meaningfully be applied to situations other than those for which they were developed. For example, Gladstone et al. (2014) studied a peer support group for children of parents with mental illnesses. The authors conceptualized their findings in a way that showed how psycho-educational information focused on the symptoms of mental illness did not address what the children themselves were most concerned about, which was their perception that their parents were “different” from other parents (Gladstone et al., 2014, p. 1171). By theorizing their data in more abstract terms related to difference (rather than to their immediate substantive character), and by providing ample context and conceptual detail, the authors enable readers to judge whether and in what way these findings could be generalizable to other situations. This might include for example, relevance to situations in which children have a parent with other kinds of chronic illness or disability, or whom they consider “different” in some other way.
Theorizing empirical instances—progressively inter-relating concepts and framing findings in abstract terms—is what allows the concepts (the key products of qualitative inquiry) to be transported beyond the original study. The creation of generalizable concepts is a core objective of value-adding analysis.
The theory of value-adding analysis is one thing, but actually generating such value in a particular research project is quite another. As in qualitative research generally, there is no single or right way to “do” analysis, no procedures that will lead uniquely and directly to particular types of findings and analytic products. Value-adding analysis is not conducted using a pre-defined set of operations that yields fixed outcomes. The form the analysis takes depends on a varying set of attributes that include the individual researcher, the context of the study, and the unique nature, time, space and logic of individual happenings (Denzin, 2019). Important considerations also include the nature and purpose of the analytic quest, the ontological-epistemological orientation of the research, the positionality and standpoint of the researcher, and the conceptual “take” (form, emphasis, directions) that is developed during the research process.
Despite the absence of standardized procedures for doing qualitative analysis, there are a number of what we call “analytic devices”—practical operations, maneuvers, thought exercises—that can be deployed strategically by researchers to help penetrate their data, disrupt common-sense interpretations and achieve new conceptual purchase on them. These devices are not rooted in particular theoretical or methodological traditions. We propose that they can be taken up, adapted, and reinvented by researchers operating in different conceptual spaces. However, no research practices are theoretically untethered. To enable researchers from diverse traditions to assess the conceptual compatibility and value of these analytic resources for their own studies, we flag four of their key underlying and interrelated features: (a) analysis as interpretation; (b) analysis as contextualization; (c) analysis as “creative presence” of the researcher; (d) analysis as critical inquiry.
Analysis as Interpretation
Unlike many “realist” (or “positivist”) forms of health sciences that are rooted in the assumption that a true reality exists objectively outside human consciousness of it, qualitative forms of science lean toward the belief that what is seen as “real” is not independent of human perception, language, and knowledge (Eakin, 2016). We have no access to an objective external world that is unmediated by human senses and perception which in turn are shaped by the socio-cultural and material context in which meaning making and representation take place. Qualitative data do not speak for themselves but have to be interpreted (see for example, Freeman, 2014; Jardine, 1992).
Data do not exist independently of the practices that produce them because they are interpretive acts engaged in by both the subjects and the conductors of research. Interviewees recount facts and experiences and stories as they understand them, perceive them or perform them; researchers count some things as “data” and others not. 1 But from the moment of their creation as “data” (e.g. words come out of the mouth of an interviewee and are recorded) they and their meanings are transformed and re-transformed continuously throughout the research process. One striking example is how oral data can be altered during transcription of audio tapes. Typists sometimes “clean up” the data, consciously or otherwise, and in so doing represent study participants in ways that can reflect their relationship to and moral assessment of the participant or the issue. Sometimes, contextual information and speech patterns are eliminated or standardized; voice, pauses and intonation disappear or become indecipherable, thus becoming open to being read in any number of different ways, and the smallest of grammatical modifications can significantly change the meaning of what was said and the way the speaker is represented (see for example, Bischoping, 2005; Bucholtz, 2000; Tilley, 2003). Transcription is never a neutral process and transforms data (and hence meaning) in myriad subtle and not-so-subtle ways.
The analytic process beyond transcription is also a deeply interpretive act. The notion of “value-adding” draws on the idea that the data do not speak for themselves, and that it is the researcher who adds value to the research by interpreting data (assigning them meaning), by conceptualizing them (seeing them as instances of or types of more general or abstract concepts), and by “theorizing” them (linking, explaining and accounting for data and concepts). That is, researchers read, organize, and assign meaning to data, and produce “findings”—all of which is bounded (and energized) by the researchers’ theoretical perspective, bank of knowledge, personal experience, methodological repertoire, creativity and imagination. That both data and their analysis have to be treated as interpretation is fundamental to the operation of the analytic devices we will describe.
Analysis as Contextualization
In interpretive qualitative research, the meaning of data is understood to be produced by the context in which it is situated, the circumstances that form the setting in which data can be properly understood and assessed. This means that the analytic process needs to include a means to take context into account in assigning meaning to data. But every phenomenon of study is situated in multiple different types and layers of context—immediate social interaction, organizational, institutional, societal, cultural, historical to name just a few—all of which frame the object of inquiry in a different way. This means that analysts have to determine which context(s) matter to the data they are working with, and how to go about using that context in interpreting data.
Although it is given lip service as a keystone strength of qualitative analysis, the contextualization of data is one of the most elusive processes in qualitative analysis and is rarely if ever articulated methodologically. Stenvoll and Svensson (2011) argue that the analyst constructs (rather than identifies) a context. Context is a direct result of the degree of interpretive investment the analyst makes in the process of contextualization. Interpretive choices are justified by anchoring (i.e. linking) the context to the data that gives it meaning. Interpretive investment requires different levels of justification, depending on whether the analyst is constructing a “literal,” a “cued,” or an even more abstract, “absent-present” context (Stenvoll & Svensson, 2011).
The analytic devices we will be introducing offer some guidance on how one can go about contextualizing data, interpreting meaning in relation to setting and circumstance.
Analysis as the “Creative Presence” of the Researcher
If analysis is interpretation, the scientific act rests heavily on the interpreter, which leads to another important premise of our analytic devices: the centrality of the researcher in qualitative analysis. In conventional “scientific method,” the investigator is seen as a hazard, a source of potential bias that puts at risk the objectivity necessary for determining scientific truth, truth that is seen as residing in the data themselves, outside of the researcher. Researchers must be removed or distanced from the research process, or at the very least adjusted for during analysis.
By contrast, in the value-adding form of qualitative analysis we are proposing in this paper, the researcher is viewed not as a source of bias but as a source of creative insight that is essential to the research task. Indeed, the researcher is expected to strive for presence rather than absence in the research process. During conventional science, the researcher’s role is to ensure validity through vigilant adherence to procedural protocol, and to interpret findings according to predetermined (usually numeric) criteria. During qualitative research the researcher’s role could hardly be more different, consisting of constantly interpreting and re-interpreting data as they are generated, trying out new and provisional conceptual and explanatory frameworks, revising data collection strategies in the course of the study, even sometimes abandoning or reframing the original research questions. Such a role requires researchers to draw constantly and deeply on their own experience, knowledge, insight, and imagination 2 to make sense of and analyze their data as the study unfolds. Methodologically, we call this the “creative presence” of the researcher: the researcher is very much present in the research process and is involved in a creative rather than procedure-centred task. The devices we present, as we will argue, help researchers exercise this presence and creativity during analysis.
Analysis as “Critical” Inquiry
Although there are various understandings of what it means to do “critical” research, we use the term here to characterize a posture toward research that aims to “trouble” received knowledge (Centre for Critical Qualitative Health Research [CCQHR], 2018, May 24; Eakin, 2016). The investigator does not take as given existing understandings of the phenomenon under study, instead inquiring into what is considered as “fact” and what the assumptions and social arrangements are that underpin how and why things are the way they are. Importantly, a “critical” posture also includes alertness to issues of power in whatever is being researched, questioning what is at stake for individuals, groups and institutions in any particular phenomenon or situation, how power is exercised in that context, how it is related to knowledge and action, and how it is embedded in and exercised through language. The devices we present here help researchers see critically, to detect underlying assumptions and interpret data differently.
The related principles of value-adding analysis—interpretation, contextualization, creative presence, and critical posture—underpin the actual operational practices of analysis, but they do not tell researchers what to actually do in a practical sense. Despite the fact that there can be no fixed recipe for interpreting qualitative data, there are nonetheless actions that assist researchers in making sense of data. Such actions we call “analytic devices” for thinking creatively about data and for generating value-added findings.
Analytic Devices
In the spirit of Howard Becker’s (1998) marvelous “tricks of the trade” for sociological analysis, we coined the term “device” to refer to particular actions (thought exercises, tactics, strategies) that can lead researchers to “see” their data in different ways, ways that can open up new spaces for interpretation and reveal new possibilities for understanding and conceptualizing data. It is important for readers to not conceive of a “device” as a sort of mechanical procedure that produces a finding. Analytic devices are concrete actions that researchers can do, but they do not result in fixed outcomes, like a t-test. They are not ends in themselves but a means to analyze. They are resources for seeing and thinking with data. As we use the notion, “devices” are to be used by researchers to gain new and alternative sightlines on data, interpretive insights and inspiration. What matters are not the devices themselves, but what they enable in the researcher.
Conventional methods for analyzing qualitative data include for example, coding, identifying themes, the “constant comparison” techniques of grounded theory, member checking, and writing memos. In our view, however, these standard tools of qualitative analysis are often practiced sub-optimally in ways that restrict their analytic potential for value-adding analysis. Often, for example, coding is limited to the assignment of pre-conceived concepts to data that are then summarized into “findings,” themes are seen as empirical facts self-evident in the data awaiting assembly, comparisons are made between concrete material observations rather than between abstract conceptual properties, member checking is done for verification rather than analytic development, and memo writing is practiced more as an inventory exercise than as a vehicle for analyzing.
In the actual on-the-ground doing of qualitative research, however, analysis is achieved in ways that are very different from those characterized by methodological catechism. Researchers develop their own repertoires of experience-driven strategies, improvising and developing their own methods for the special challenges of qualitative analysis, such as those for taming the vast volume and types of data, for contextualizing meaning and for handling the multiplicity of interpretative possibilities. Researchers, however, seldom write (or perhaps even speak) about the informal methods they develop and use, perhaps in part reflecting dominant scientific norms that cast researcher-contingent research methods as too idiosyncratic and unstandardized to have scientific legitimacy.
There are as many of these invisible methodological practices as there are qualitative analysts and research projects. We share here several of the informal “devices” for analysis that we have found useful in our own research and that we have observed to be particularly helpful to our students. Some are general approaches while others are more particular and fine-grained. Most involve heuristic self-questioning and interaction with data, strategic thought exercises and forms of “reading” (dedicated review and interrogation of data in search of particular types of interpretive opportunities). All operate optimally in conjunction with each other.
Putting Reflexivity to Analytic Work
One elemental device is to use oneself as an object of inquiry (reflexivity) and to use this knowledge as a resource when analyzing data. Although reflexivity and positionality are hallmarks of qualitative research, such self-introspection is mainly invoked in the early design stages of a study and during the assembly of data. Reflexivity is less frequently articulated as a tool of analysis (although see, Alvesson & Skoldberg, 2009; Braun & Clarke, 2019; Srivastava & Hopwood, 2009, 2018). Researchers’ own knowledge, experience and self-analysis can be explicitly mined for what they can reveal about the phenomena under investigation and can be a valuable source of comparison and insight.
One way researchers can deploy their own knowledge and experience for analytic profit is constantly to probe their relationship with the subject matter by asking strategic questions that bring important but submerged issues to the surface. For example: How am I as a researcher reacting to this phenomenon or situation, and what does that tell me? An example of this comes from Eakin Hoffmann’s (1974) participant observational study of how stroke patients were cared for in a general hospital. The hospital staff had been observed to use high-pitched cheery talk when communicating with patients. When the researcher found herself also speaking in that manner, she tried to think through why she would be doing so. This self-scrutiny led to exploration of the social functions and organization of upbeat talk in health care and informed her analysis of why staff did what they did and how that framed the recovery experience of patients.
Other heuristic questions that graft onto reflexivity include: What stakes (interests, benefits, risks) do I have in the study’s outcomes? How have data been influenced by interaction with me, and what are the nature and dynamics of that influence? The point here is not to confess the influence (in a “limitations of the study” sort of way) but to use it as a source of insight into the phenomenon being studied. Of special importance is the question: Whose side am I on? 3 Where do I stand in relation to the key participants and interests at play in the research situation? From whose point of view are the research questions framed? For whom are the potential findings meaningful and useful? The notion of “side” can offend researchers, so deeply ingrained is the belief that science is an impartial enterprise. But there is no neutral platform from which to conduct research, no position that lies outside of the inquiry or its context. It is crucial for researchers to know their stated or (usually) implicit “standpoint”: where they are positioned in relation to the subject matter and what that means for how they approach the analysis, and frame and write their findings (Eakin, 2010).
Despite its methodological importance, standpoint is rarely explicitly acknowledged in academic texts. One commonly unacknowledged standpoint in health research is the managerial/administrative perspective, which often goes unrecognized as a “standpoint” at all. Many studies in the health field are “applied,” or designed to address practical problems of policy, practice, and service implementation, especially to make organizations and services more efficient or effective. Sometimes this stance is acknowledged overtly; elsewhere a concern with service delivery improvement is entirely implicit, appearing to be tacitly experienced by researchers as a “natural,” unaligned viewpoint not located in any position or “side.” Examples of standpoint in research and its consequences for analysis can be found in Eakin’s (2010) consideration of the literature related to health and safety in small work settings. Here she observes that framing a research problem from the standpoint of the workers can invoke the critique that it is advocacy-driven research done from a biased political position, while framing a research problem from the perspective of system or institutional improvement/management is considered to be a non-partisan platform for research, and is not regarded as a “side” at all. Knowing one’s standpoint as a researcher is key to critical analysis: it can help bring into analytic gaze assumptions and forces that might be driving certain interpretations, or stifling conceptualizations of the data that might be unpalatable or inadmissible to research stakeholders including fellow scholars, research sponsors, and the individuals or communities under study.
Reflexivity is an important fundamental device of analysis. Self-interrogation about the underlying interests and intent of the research and the participants/users of the research can bring to light underlying dimensions, processes and forces operating in the data, and can serve in identifying elements of the research problem that are not at the surface awaiting easy empirical observation. Reflexivity is central to the analytic process: for determining the unit of analysis of the study, for building a coherent conceptual apparatus (storyline) for the findings, and for maintaining a consistent perspective and voice in the text. Heuristic reflexive questions can help deepen the researcher’s imaginative reservoir, invite fresh insight on empirical material, focus the conceptual depth and breadth of field of a study, and avoid suffocation by the vast number of possibilities thrown up for investigation by qualitative inquiry.
Everything is Data
There are few mantras in qualitative analysis as enriching as the notion “everything as data” by which researchers are urged to be more inclusive and broad-reaching in what they use as “data” for analysis, and to not feel constrained to use only data formally designated at the outset of a project. Although Denzin (2019) and others are currently arguing that “data are dead” and the practices that produce them are under assault, we propose that our notion of “everything is data” adds much to the “value” in value-adding analysis, and suggest that researchers should venture beyond the transcribed content of interview transcripts or the texts of particular documents as laid out in their original research protocols. First, as we have shown in the previous discussion of reflexivity in analysis, researchers can make valuable use as data of their own responses to the research situation and findings. Further, however, researchers can draw upon and incorporate into their thinking all manner of opportunistically observed data that have something to “say” about the research issue or about conceptual formulations emerging in the course of analysis. However, it is not always self-evident what might have something to “say” about the phenomena of interest. Recognizing what might relate to a particular on-going analysis is a learned research skill that requires interpretive creativity and theoretical knowledge on the part of the researcher.
In our own research, there are many examples of data that we did not at the outset consider to be “data” but which made important contributions to an emerging conceptual focus. For example, in the course of an interview study of workers’ experiences with work injury (Eakin et al., 2003), the researchers noted how tense and full of conflict the process of seeking compensation was reported to be. To further understand the nature of this experience, the authors subsequently undertook a study of frontline service work at the compensation board (Eakin et al., 2009). Here, the difficult relationship between injured workers and the compensation agency was volubly and graphically expressed by the physical setting of the injured worker–compensation board encounter. The interior architecture of the compensation board’s office building (where injured workers met with their adjudicators who made the decisions regarding the eligibility of their claims) featured meeting rooms with transparent bullet-proof walls and lock-down security systems. These observations constituted “data” that spoke powerfully to how injured workers were perceived within the institution, and to the emotionally charged social relationships the building was designed to contain.
Another example of the value of opportunistic “data” comes from the previously mentioned ethnographic study of the care of stroke patients in a general hospital (Eakin Hoffmann, 1974). An attendant, seemingly haphazardly, had positioned a wheelchair occupied by a patient who had had a stroke so that it faced into a blank wall, rather than into the busy hospital physiotherapy clinic area where the patient was awaiting treatment. This placement “spoke” to the attendant’s perception of the patient as not fully sentient, as not socially present, and helped the researcher understand and conceptualize how these patients were understood by hospital staff.
Although an “everything is data” stance is important throughout a research project, it is especially useful during analysis, where it can spark the researcher’s interpretive imagination and invite the linking of disparate observations and facts. Not only should researchers be alert to noticing new data from outside the existing data set that might be relevant, but they should also feel primed to recognize new meanings for “old” existing data. An example comes from Eakin et al.’s (2003) research on injured workers and compensation claiming, where an excursion into the literature on the broader issue of social welfare spawned a re-interpretation of what injured workers were saying when they insisted that work injury compensation was “not welfare.” Here, the “everything is data” approach consisted of drawing into the analysis (not just into the discussion section) findings from other studies that had conceptual/theoretical rather than direct substantive relevance to the topic at hand.
It is important to note that from an ethical perspective, an “everything is data” orientation does not give researchers some sort of carte blanche, or free rein, to collect formal research data outside those sanctioned by the research’s ethics approval agreement (e.g. departing from agreed-upon samples or interview schedules). Rather, “everything is data” refers to the research value of observing the informal, subtle social and material facts, circumstances and meanings embedded in the research context and in public places, and of using such “data” to deepen and enhance the interpretation of the formally structured and collected data. An “everything is data” stance enables researchers to make use of what is already in their heads—past experiences and observations of related “data” in the public domain and from the literature. Once researchers have an observation or idea born of what they have seen or read, they cannot simply erase it from their minds just because such “data” were not formally enumerated in a research ethics proposal. Of course, not all such “data” can or should be included as empirical evidence in the ultimate written analysis, certainly not if they can inappropriately identify individuals or specific institutions. The point is less to cite everything that is used as “data” (impossible anyway) than to use lots of different and diverse material for analytic insight.
An “everything is data” posture can be a liberating and invigorating analytic device because it frees researchers of dependence on confining data sets defined before the study got underway, encourages lateral thinking, and helps them creatively to flesh out and diversify dimensions of the topic being explored and see generic qualities of particular phenomena (i.e. “theorize” or frame findings in more abstract terms).
Reading for the Invisible
An extension of the “everything is data” device is the possibility of drawing meaning not just from materially evident data (like words in a transcript, or observed actions and events) but also from data that are not directly observable empirically, that are invisible to the usual research eye. This invisibility takes several forms. First, meaning can reside “between the lines” of an interview transcript. In analyzing data, researchers need to become actively attuned to where and how what is actually said might be a surrogate or a stand-in for something else. Meanings are embedded not just in explicit representations but also in innuendo, doublespeak, metaphor and other linguistic elements of communication (Mauthner & Doucet, 1998; Stenvoll & Svensson, 2011). It is important to “read” data for such indirectly articulated or expressed meaning (see for example, Gladstone et al., 2014).
In a similar way, researchers need to read data for silence: what is not said or done at all. Silence is a form of data whose value lies in absence rather than presence. What is not said can tell one much about a particular situation (Poland & Pederson, 1998). Silence can be accessed by being attentive to it, looking for it, and by following up with such questions as: why can certain things and not others be spoken, and in what circumstances? Is silence a personal and individual response, or is it an “elephant in the room” that may be more socially governed? Considering what is not being said and why can reveal as much as what is actually said and can produce breakthrough analytic realizations for the researchers. Kawabata and Gastaldo (2015) provide an example of the interpretive importance of silence in analyzing interviews with homeless men in Japan, a “collectivist” society in which silence is a culturally grounded part of communication. The authors’ reflexive approach brought out underlying meanings of silence, such as acknowledgment of a shameful past.
The notion of invisible data might also include those related to “performance.” Here the analyst might ask of particular data not just what a person is saying but what is the person doing with what is said, what is achieved (consciously or otherwise) by the words and their expression. Analysts might ask, “how does saying X position the speaker and what does it enable (or preclude)?” An example of the notion of performance is from Eakin et al.’s (2003) study of work injury and return to work. When workers recounted the circumstances in which they had been injured, and the details of their injuries, it became possible to see the data as performance, to interpret them as the workers’ efforts to handle the ambient social risks of work injury (blame, stigma, disbelief in the authenticity of their injuries) by representing themselves in particular ways through their words and mode of communication. For example, workers spoke in such a way as to convey that they were not at fault for the injury, were not willingly off work, and were properly motivated to get back to the job. That is, injured workers were trying to position themselves as honest and worthy claimants of compensation support. Here the data revealed more than their face-value sense; read for their performative dimension they became understandable in a very different way, and constituted evidence of the underlying social forces at play in what the workers were saying and doing, i.e. that employers, other workers, compensation professionals, and even the general public started from a position of suspicion regarding the legitimacy of work injury claims. Read as performance, words can communicate something quite other than their literal face meanings and open up new directions and avenues for analysis.
Reading for Anomaly
Qualitative analysis has a certain methodological lean toward identifying sameness, such as searching for similar words or phrases that appear repeatedly in transcript data, or collecting together recurring topics and considering them “themes” or indicators of “saturation.” The alternative, reading for anomaly (what is different or doesn’t “fit”), is less often articulated as an analytic possibility.
It can be hard to see difference, perhaps because researchers are drawn more easily into what is familiar, what is already known. In his call for us to view society and individual experience through a sociological lens, C. Wright Mills identifies the need for us to “make the familiar strange” by questioning what is seen as “normal,” what is taken for granted (Mills, 1959). The parallel in this paper is the value of making seemingly ordinary data “strange” and in need of explanation. There are many ways to make strange, one being the dissection of key common terms used in a data set or research field, as Becker (1993) did in his classic investigation of what the designation of patients as a “crock” meant in the common everyday language of medical practitioners.
In addition to making strange, reading for anomaly in data can come about by attending closely to what seems surprising, what is unexpected. Alertness to unanticipated data can yield conceptual insight into elements of the situation that might be at play and flag fertile sites for further inquiry, or re-interpretation of existing concepts. In Gladstone et al.’s (2014) observational study of support groups for children of parents with mental illness, a surprising presence of humor in the data led to exploration of the role of humor in how children navigate their social situations. For example, examination of the sites and social organization of humor brought into view how the children make claims on one another and challenge ideas and beliefs about their circumstances without saying anything too explicit, which helped them manage difficult emotions and powerful others (most often adults) in situations where they are unsure of what is expected of them.
Reading for anomaly might also include attentiveness to contradictions or conflicts in the data—negative or deviant cases as they are often called in qualitative research. Spotting (and using) data that do not support patterns or explanations emerging from data analysis is important in the process of revising concepts and theory, and can stimulate potentially profitable new lines of questioning. Upon considering a contradiction in some interview data, for example, one can ask “what is happening here?” and explore possible alternative interpretations: does this indicate a flaw in the current conceptualization, the interviewee’s ambivalence or self-presentation, or the presence of an element of something not yet grasped by the analyst?
Watching for anomaly is an analytic heuristic that can prompt critical interaction with the data and help the researcher get underneath surface interpretation and “see” new possibilities in the data. Importantly, a single case, observed analytically for its difference (from other cases) can have a pivotal role in a value-adding analysis (see for example, Jardine, 1992). Researchers need to cast off the legacy of quantity-based research and recognize that a particular property of a phenomenon might rarely—even never—actually occur empirically but can nonetheless be essential to understanding it and be a lynchpin to what is transpiring in the data.
Generative Coding
The activity of coding is the most common default device in qualitative analysis. There is a past and current critique of coding as a methodological strategy (including suggestions of alternatives and the possibility of not using coding in any form), but here we are considering only the limitations of prevailing approaches to coding and proposing an alternative form that is better positioned to produce value-adding analysis.
Coding, as conventionally practised in qualitative inquiry, is different from coding in value-adding analysis. First and most importantly, the function of conventional coding is often seen as residing in its result: a code term is assigned to segments of data considered similar or related and then collected together to represent content of the data set. By contrast, in value-adding analysis, the function of coding lies less in how it summarizes content and more in how the coding process can induce ideas in the researcher. Here coding has a “generative” function that helps the researcher in conceptualizing and interpreting the empirical data. That is, the process of coding is itself a core analytic device, a core means of analyzing data.
In conventional qualitative coding practice, codes are often treated as categorical markers of facts and concepts residing self-evidently and unproblematically in the data. Coding consists of classification of data by assigning various labels to data segments, which are then grouped and organized, often into catalogues of so-called “themes.” The concern of this form of coding lies primarily with issues of accuracy, completeness and reliability, and the code categories typically consist of and are portrayed as being “found” in the data.
A further feature of much conventional coding that distinguishes it from generative coding is that code labels assigned to segments of data are often pre-packaged or ready-made concepts gleaned “off the rack” from the literature or stock public understandings, and this can restrict what can be learned from them. One example of ready-made codes and their implications for research is the ubiquitous model of qualitative study that explores reasons why people do or do not avail themselves of a particular health resource or service. In a study, say, of why women do or do not go for mammogram screening, the core research concepts tend to be defined in advance. “Getting a mammogram” might be conceived of as (or tacitly assumed to be) a motivated health-related behavior influenced by “factors” such as knowledge, rational assessment of perceived risk, and cultural understandings. Codes assigned to interview data would come primarily from existing conceptual models from the literature incorporating notions like “delay” or “social support” or “information.” Such codes are assigned to explicit articulations by participants with the assumption that they can in fact know their own motivations, and that their accounts are true and mean what the researcher thinks they mean. This approach to coding may preclude alternative interpretations of the data. For example, an observed action may not be best characterized as “delay”: it may not reflect the rational “choice” implied in that characterization, and may not even be a “health-related” act from the perspective of the actor.
A final feature of much contemporary coding is that it focuses, sometimes entirely, on only one arm of the coding task. Coffey and Atkinson (1996) describe two functions of coding: 1) the consolidation, reduction and simplification of data (for retrieval and sorting and other data management necessities), and 2) the expansion, opening up, and complication of data. The second of these is much less well described in methodology texts or research publications, but it is this function of coding that is most needed in value-adding analysis. Here is the generative element of coding, involving not the cataloging of data via ready-made concepts but the construction of custom-made codes powered by the critical creative analytic insight of the researcher. In generative coding, rather than being applied to data, codes are generated by the researcher interacting with and interpreting the data.
Generative coding is thus not a mechanical application to data of pre-conceived labels, but a slow and thoughtful process of creating concepts and linking them. To be generative, coding involves constant re-consideration and re-conceptualization of the concept at hand and its relationship to other codes, accompanied by continuous parallel memoing (writing) about the ideas and associations, questions and possibilities and linkages underlying the “code” label and/or thrown up by the coding process (writing as an analytic device in its own right will be discussed further later on).
Coding is thus a fertile site of deep analysis (where codes-in-progress are shaped and reshaped) and a birthing place of new concepts (or a graveyard for those that lose their analytic promise). The coding process is where concepts are linked with other concepts, with more abstract or local expressions of them, and melded into the gradually emerging “storyline” of the analysis. It is impossible in this short paper to describe or illustrate much of this approach to coding, but what should be clear is that coding is a prime site of the foundational analytic processes described earlier: it is where data are creatively and critically interpreted and contextualized, using all the critical and imaginative analytic resources the researcher can bring to bear on them.
So how is “generative” coding actually done in practice? Since such coding is a quintessentially interpretive act, there is no recipe for doing it. But there are concrete activities that can help kickstart and nourish the creative momentum of coding. One is to maintain a dynamic code list (a “code book”) that both records and produces the evolution of codes as they develop in the on-going analysis. Detailed characterization and thinking about a code changes with each incremental use in a data set, as new features of or faults with codes become evident to the researcher. As data are read and digested, codes previously assigned are qualified, re-defined, merged with others, or abandoned altogether. The point is not to fix a static meaning to a code which is then applied to a segment of data, but to do the reverse, to use the empirical data segment to reflect on and refine the code/concept—its meaning, its foundational features, its limitations and qualifications, from whose perspective it is, what events or notions it includes or excludes and so on. Coding is both a product and an engine of conceptualization. Documenting the development of the code concept is a vital, dynamic site of metamorphosis of the conceptual infrastructure of the analysis.
At a practice level, generative coding is a slow, iterative process of re-conceptualization that feeds the researcher’s ability to “grow” new concepts beyond the categorization of already-known entities, and to weave them into a theoretically coherent picture or account. This kind of coding is “generative” in the sense that it creates something new. It is a creative analytic process, achieved by the researcher developing concepts in “continuous dialogue with empirical data” (Becker, 1998, p. 109).
For all its value, however, this approach to coding—detailed investigation of discrete segments of data and drilling down on meanings and interpretations—can have the effect of fragmenting the data, of losing the sense that comes from combining or linking segments into a bigger picture. A device to counter this narrowing vision is to read the data for “gestalt”.
Reading for Gestalt
By assigning discrete labels to data through coding, the “gestalt” (a conception of the whole being greater than the sum of its parts) can be obscured or neglected altogether. By breaking down data into constituent parts, coding may seem to make analysis more manageable, but it makes it difficult to interpret data in broader context, to link codes and concepts, and to understand contradictions between them. Reading for the gestalt, seeking insight into the bigger happenings going on in the data, can provide some of this interpretive context, and can provide a quite different perspective on the subject matter.
Material for reading data holistically can come from within the data (threads of more general forces governing what people say and do), and/or it may come from without. Hollway and Jefferson (2000) describe how theory and reflexivity can be used to explain contradictory elements within an interview, or in a series of interviews within the same family. Such reading of the gestalt can be bypassed by researchers, sometimes because of commitment to “giving voice” to the interviewee or to privileging experiential, respondent-centred accounts (Hollway & Jefferson, 2000, p. 80).
Strategies to interact with and open up and interrogate data, to carry out a more holistic analysis, to work with and move beyond coding, include, for example, producing data summaries, and inserting reflections on the wider context into memos. Atkinson (1992) describes “extended reading strategies” using fieldnotes to counteract a “fragmentation culture,” and Frost (2009) uses the stories embedded in participant interviews to apply a narrative analysis that can read for a whole that is greater than the sum of its many “data” parts.
Reading for the gestalt is important not just for interpreting data in context, but also for moving more substantive-level codes and concepts to a higher level of abstraction and theorization. However, consideration of the gestalt—the bigger story—is an interpretive act itself: analysts choose one larger picture among many possibilities for understanding and representing their data. As with other dimensions of value-adding analysis, there are no methodological rules for how to do this, but the ensemble of analytic devices we have been laying out can trigger analysts to see different possible gestalts and their consequences for the emerging analytic story.
Heuristics for Theorizing
To “theorize” one’s concepts and findings is to move the proceeds of generative coding into more abstract form, to see commonalities and connections between the particular case and other more general, complex or higher-level phenomena. Some qualitative researchers call for empirical description that does not prioritize theory (especially for applied practice-oriented research, for example, Thorne, 2016, 2020). However, for value-adding analysis, we propose that there are few things as useful as theory for making sense of empirical data, even in practical application-oriented research.
How to actually go about theorizing data, however, is seldom described in practical terms (for an exception, see MacFarlane & O’Reilly-de Brún, 2012), which has kept this key activity in the darkest corners of the methodological black box of qualitative analysis. There are, however, no discrete procedures for theorizing, only heuristic devices for thinking theoretically about empirical findings. A useful place to start is Howard Becker’s chapter (1998, pp. 109–143) on generalizing sociological concepts above and beyond the particular. His armamentarium of “tricks” for thinking abstractly includes trying to imagine the question that one’s data are the answer to, letting the case (empirical observation) define the concept rather than letting the conceptual category define the case, and describing the case without using any of the identifying characteristics of the actual case (such as particular roles, organizations or settings). Such maneuvers can help analysts to see a particular specific entity in more abstract terms, and to identify generic features of the phenomenon under study that can be “generalized” (shown to have something in common with or be transferable to other people or phenomena near and far from what is under immediate study).
Our own research suggests other heuristic devices for facilitating the conceptualization and theorization of specific observations, many of which take the form of self-provoking questions that can bring into view new and more general properties of data or concepts. One example is “What is this an instance of?” where we ask what more comprehensive entity or process a finding or observation could be an occurrence of, a part of, a type of. In Eakin et al.’s (2003) study of the experience of work injury among workers and employers in small workplaces, two empirical “facts” were initially discerned: (a) the belief that injured workers cheat the compensation system to get off work by lying about or exaggerating their injuries, and (b) the dismay of injured workers at being seen as dishonest and at not being “believed” with respect to their injury claims. These data were coded separately at a substantive level (e.g. respectively as “cheating” and “disbelief”), but the question “what is this an instance of” led to recognition of more general concepts that seemed relevant to/related to them both, such as “stigma” and “deservingness.” Combined with serendipitous data from outside the formal study—a legal aid professional insisting that work compensation should not be thought of as “welfare” (note the “everything is data” principle in operation)—the focus of the analysis shifted to comparing the work injury compensation system with other welfare and disability support systems to explore the conditions that might foster claim suspicion and judgments of (un)worthiness. Ultimately, the “what is this an instance of” question helped crystallize the more general theorization of the findings as expression of a broad, society-wide “discourse of abuse” regarding the use of public support services.
Theorization constitutes much of the “value” of value-adding analysis. But there is no formula for how to “do” it, or for what will lead to the “best” theoretical framework to make sense of a particular data set. There are many possible theoretical scaffoldings for any data, and how a researcher comes to see one or another of these possibilities depends on the individual researcher’s disciplinary background, knowledge, and imagination (creative presence). Abstraction can occur spontaneously (“ah ha!” moments) or be brought into focus by various devices for deliberate thinking and by consistently reading (and drawing on) literature inside and outside of our disciplinary training and substantive topic.
Writing as Analysis
In qualitative research, writing does not simply summarize the output of analysis (“writing up” the results of analysis), it actually plays a central role in creating the findings. Writing itself is a part of analysis: to write is to name a phenomenon (transform it into words), and to name is to conceptualize (express the nature of the phenomenon and its relationship to other phenomena). But at the same time as writing helps analyze (make sense of) some reality, writing also “constitutes” that reality. That is, naming brings something into existence, makes something “real.” Thus, to write is to analyze, and to analyze is to write.
Written expression of a thought about the subject matter is an analytic act because it requires concepts to be selected that capture something in the data, that connect in some way to empirical data, and that play a part in the emerging research story. Although almost everything that is written down, especially at the beginning of a project, is destined to be changed or abandoned, the process of writing and re-writing is nonetheless critical to the gradual distillation of what works, matters and coheres in the analysis. Writing, we propose, is the most important device for analyzing qualitative data. It is also the most widely used device for analyzing, although it is rarely acknowledged as such. We argue that recognition of the role of writing as an instrument of analysis is fundamental to marshalling its very significant creative force in value-adding qualitative research.
Writing matters not just at the end of analysis, but throughout the entire research process, in all places where ideas (however nascent or polished) can be put into written form: in the memos and annotations created during generative coding, in reflexive journals and other note-making systems, in data summaries, in storyline development and so on. That is, researchers should be writing analytically from the very outset of a project. Students often put off writing (or are reluctant to do so) because they see writing more as documentation than as interpretation, more as a means of presenting results than as a means of doing analysis. They may feel uncomfortable putting incomplete thoughts into writing, or be unwilling to take risks with potentially unworkable or inadequate concepts and texts. Writing is not about recording already generated, fully-formed ideas. Rather, in the process of writing the ideas are assembled from threads and elements and crafted into “findings.” Writing is a vehicle for articulating, framing, sifting, organizing, questioning and putting shape to the myriad analytic elements and ideas whirling around in the head of the analyst. Ultimately, writing is a key route to conceptualizing and theorizing research findings. Researchers write in order to become conscious of what they want to say, what they can say, how they can say it.
Writing is key to the analytic process primarily because it is based in language, and language is pivotal to the creation of meaning. Language—including form, structure, grammar, semantics—is central to how meaning is constructed and communicated. Language is never neutral; it always casts the subject matter in a particular way, positions the researcher and the researched in particular locations, embodies a particular theoretical and political perspective. The words, adjectives, verbs, and metaphors used in describing or representing data constitute and create meaning. Language represents research subjects as certain kinds of people. For example, because diction and grammar reflect much about the social position of the speaker, even slight changes to “verbatim” transcript data through transcription (such as removing textual markers like “umms” or correcting grammatical errors) can mis-convey important things about a participant and mis-inform ensuing interpretations.
Language matters. For example, writing “she claimed that…” rather than “she said that…” changes the meaning of the observation by casting doubt on the veracity of what was being said and perhaps also subtly discrediting the speaker. Language frames the research findings as particular kinds of accounts. Literary tropes can convey findings in very different ways and can carry judgmental undertones like empathy or blame toward certain subjects. Further, language and narrative structure are key mechanisms for differentiating between the voice of the researcher and that of research participants. Researchers need to be highly attuned to what language “does” in creating meaning and deploy it carefully and self-consciously.
Like other aspects of analysis, how to write analytically and how to analyze through writing are not often adequately described (although see Augustine, 2014; Evans, 2000; Golden-Biddle & Locke, 2007). A practical device for anchoring writing as a centerpiece of analysis is what we call “title work”: using the study’s title as a site of intense thought and precision analysis. Researchers need always to have a title-in-progress that describes what the research project as a whole is saying. Researchers may resist such a task, seeing the title as only a final flourish for when research is complete and the findings are fully determined. We propose, however, that from day 1 of a project, researchers need to center a working title in their analytic activities and return to revisit it constantly. Working on the title is an invaluable device for analyzing because it both forces and enables the researcher to take continuous stock of what the title says about what they are finding, doing, and highlighting, and how it represents their current thinking about the research focus and story. This exercise helps the researcher keep on top of how the analysis is evolving because the title captures key concepts, the main point of the research, the relationship between the core and subsidiary elements of the analysis, and so on. The analytic benefits of working on a title are enhanced by a colon dividing the title into two parts, because it obliges researchers to clarify for themselves what their main point is, which is the general and which the particular idea, and what field or readership the research is speaking to. Titles are microcosms of both the minutiae and the big picture of the research, and title work provides a demanding but productive analytic workout.
Writing as an analytic act is also central to writing the results of a study. Qualitative findings cannot be represented in histograms and tables where the underlying analysis producing the results is commonly understood and assumed by the reader. Qualitative results must make the analytic pathways that produced the findings clear enough for the reader to understand and find compelling. Further, in qualitative research, “writing up” the results is not just a process of presenting already fully developed findings. Qualitative results take their final form during rather than prior to the writing process. This is not just because of the nature of the conceptualization process described earlier but also because many additional analytic considerations arise during the later stages of writing results, such as how to find the core story around which the sub-stories pivot, how to capture the attention and interest of the reader, how to frame a compelling narrative structure and literary style, how to stay accountable to the data on which the analysis is built, how to use the right kind and amount of evidence to make an interpretation robust and convincing, how to balance the voices of participants and researchers, and so on. Moreover, as with all other sites and forms of writing in qualitative research, writing the research findings is not a neutral activity. The narrative structure and the wordsmithing of the text are imbued with meaning, value and standpoint.
Full elaboration of the nature and importance of writing as an analytic device is not possible here (a helpful resource is Golden-Biddle & Locke, 2007). We can, however, mention one useful device for exploiting the centrality and power of language in writing results. We often ask our students to describe their research findings in a single sentence (Sandelowski, 1998). Although framing a study in one sentence can be an outrageously demanding exercise, it can help researchers separate the core from the peripheral points and liberate those bogged down in multiple complex lines of thought. Once stated, the single-sentence summary can serve as a guide to researchers in tethering and orienting their analysis and in deciding which ideas and data must stay and which must be let go. As we described earlier with title work, the summary sentence is also a work-in-progress, but its presence and constant revision function as a useful part of the analytic process.
Writing is a primal and crucial device of qualitative analysis that pervades the research process. Writing is simultaneously a means of analysis (forcing the formulation and crystallization of interpretations and concepts) and a result of analysis (expressing and representing the outcomes of the research). Because of the constitutive role of language in analysis, writing is never neutral, and is thus political as well as scientific. Writing is the most important methodological element in the black box of qualitative analysis. Writing, however, most especially in the later stages of constructing findings, can be a slow and very demanding process that takes considerably more time and energy to accomplish than this stage of research is often accorded within university degree programs or research grant protocols.
The Value of Value-Adding Analysis
Our opening observation was that qualitative research (particularly in our field, the health sciences) is often practiced in a way that generates too little new knowledge, and we proposed that this arises largely from deficits of analysis. When researchers venture no further than cataloguing data into pre-existing concepts and scouting for “themes,” they fail to exploit what qualitative methodology does best, and squander its distinctive powers of insight. To address this analytic weakness, we put forward an approach to analysis that strives to “add value” to the output of qualitative research by deepening the interpretation of data and enriching their conceptualization. We described ways to “do more” with qualitative data by freeing up and stretching researchers’ capacity for “seeing,” characterizing and accounting for data, for achieving critical perspective, and for transforming particular local findings into more abstract generalizable knowledge.
The products of qualitative research—concepts, accounts, explanations—are created by the analyst, but they are not fabricated out of thin air. Tethered to empirically observed data, they draw their interpretive and analytic strength from the researcher’s bank of theoretical knowledge, methodological acuity, personal experience, and imagination. At its core, value-adding analysis is a scientific craft rather than a scientific formula, a creative assemblage of reality rather than a procedural determination of it. The researcher is the primary generative and synthesizing mechanism, with the “devices” operating to disrupt existing knowledge, bring new things into sight, and jump-start critical perspective and theoretical ideas.
The ultimate value of value-adding analysis resides in its ability to generate new knowledge. One type of new knowledge is the classic notion of “discovering” things heretofore unknown, such as elements or forces at play that have not been previously identified. But new knowledge also includes the re-conceptualization of what is already known, such as questioning and reconfiguring what is presently understood, assumed or believed. And perhaps most significantly, new knowledge can come in the form of a reframed research problem: revealing the inadequacy of the original formulation of the research and re-defining what is in most need of conceptualization and explanation. Perhaps the most prized endpoint of “doing more” with qualitative data is not an answer but a better question.
Acknowledgment
The authors acknowledge that a version of this article (entitled “Inside the Black Box of Qualitative Analysis”) will appear in the following textbook: Bosi, M. L. M., & Gastaldo, D. (Eds.) (2020–in press). Tópicos Avançados em Pesquisa Qualitativa em Saúde: Fundamentos teórico-metodológicos [Advanced topics in qualitative health research: Theoretical-methodological foundations].
Declaration of Conflicting Interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author(s) received no financial support for the research, authorship, and/or publication of this article.
