Abstract
The purpose of this article is to assess whether divergence of grounded theory method may be considered valid. A review of literature provides a basis for understanding and evaluating grounded theory. The principles and nature of grounded theory are synthesized along with theoretical and practical implications. It is deduced that for a theory to be truly grounded in empirical data, the method resulting in the theory should be the equivalent of pure induction. Therefore, detailed, specified, stepwise a priori procedures may be seen as unwarranted or arbitrary. It is concluded that divergent grounded theory can be considered valid. The author argues that securing methodological transparency through the description of the actual principles and procedures employed, as well as tailoring them to the particular circumstances, is more important than adhering to predetermined stepwise procedures. A theoretical foundation is provided from which diverse theoretical developments and methodological procedures may be developed, judged, and refined based on their own merits.
In 1965, Glaser and Strauss asserted that when investigators “are convinced that their analytic framework forms a systematic substantive theory … then they are ready to publish their results” (Glaser & Strauss, 1965, p. 288). Two years later, Glaser and Strauss' seminal work titled The Discovery of Grounded Theory was published. In this tome the authors labeled “the discovery of theory from data … grounded theory” (Glaser & Strauss, 1967, p. 1). Subsequently, it has been affirmed that grounded theory was developed by Glaser and Strauss (1965) while studying the interactions of hospital personnel with dying patients. Their approach was more fully enunciated in 1967 (e.g., Wells, 1995). Indeed, The Discovery of Grounded Theory can be seen as an extended description and explanation of the method, and this volume has remained the foremost reference point for researchers using the grounded theory method (Bryant & Sprague, 2002). However, the method has been elaborated and advanced by its originators and their students (Wells, 1995); it has diversified (Heath & Cowley, 2004), including adaptations in ways that may not be completely in line with all of the original principles (Goulding, 1998; Gurd, 2008).
Originally, grounded theory evolved chiefly in the field of health, with a focus on nursing (Dey, 1999). The method has subsequently been employed in a wide array of fields. For example, within management (cf. Jones & Noble, 2007) or business, grounded theory method has been applied to areas such as financial reporting (e.g., Hussey & Ong, 2005), buyer decision processes (e.g., Sternquist & Chen, 2006), consumer experience (e.g., Daengbuppha, Hemmington, & Wilkes, 2006), and marketing management (e.g., Baines & Egan, 2001), among others. However, the deployment of grounded theory method has frequently been limited to generic references to the practice of a grounded theory approach (Jones & Noble, 2007; Pettigrew, 2002) (cf. Gurd, 2008; Hausman, 2000; Rust, 1993; Strauss & Corbin, 1994). Indeed, grounded theory has without a doubt come into vogue, in part through the studies published under its heading, even though these rarely do more than indicate the methodological procedures employed (Dey, 1999). In fact, the term grounded theory “has been adopted rather indiscriminately by a variety of researchers over the past decades to denote qualitative studies that are at best tenuously based on the methodology they formulated” (Geiger & Turley, 2003, p. 581). Such incomplete applications and divergence from its original principles may be viewed as symptomatic; they raise questions as to the essence of grounded theory. While imperfect matching between the original principles of grounded theory and subsequent applications does not rule out that there is a common essence across applications, the divergence may be fundamental. The latter case would indicate that the term is losing its denotation (cf. Gurd, 2008). Importantly, the result may be confusion among scholars and practitioners as to what method was employed in a specific study, together with associated difficulties in evaluating the results.
As Dey (1999) noted, while grounded theory has attracted broader interest, it is feared that “diffusion has led to dilution of its basic canons of inquiry” (p. 13). However, while some have argued that corruptions of grounded theory “place its credibility at risk” (Wilson & Hutchinson, 1996, p. 124) (cf. Gurd, 2008; Jones & Noble, 2007), others have argued that the diversity is consistent with the spirit of grounded theory (e.g., Joannidès & Berland, 2008). Consequently, this article assesses whether divergence of grounded theory may be considered valid.
The remainder of this article is organized as follows: The logic behind grounded theory method is outlined. Central features are derived which serve as a background to the general research process. The general research process introduces a more detailed and stepwise description along with comparisons between two schools associated with Glaser and Strauss, respectively. Finally, based on this review, the essence of grounded theory is synthesized together with theoretical and practical implications regarding the validity of methodological divergence.
The Logic Behind Grounded Theory
Glaser and Strauss (1967) argued for the discovery and generation of theory from systematically obtained data and labeled it grounded theory (Cohen, 1969). Certainly, the approach can be counted as one of those qualitative methods close to the empirical data: grounded theory, phenomenology, and inductive ethnography (Alvesson & Sköldberg, 1994) (cf. Strauss & Corbin, 1994). Furthermore, it has been maintained that the origin of grounded theory lies in symbolic interactionism (Siguaw, Simpson, & Baker, 1998), and the importance ascribed to empirical data can be maintained while following this thread. The term symbolic interactionism was coined by Blumer (1937), and indeed, Blumer's development of the interactionist approach, together with naturalistic inquiry, are key influences on grounded theory (Heath & Cowley, 2004). According to Blumer (1937), the view taken by symbolic interactionists is that “the human infant comes into the world with an uninformed, unorganized, and amorphous nature” (p. 152). The infant only expresses discomfort and/or stress under the influence of an impulse. That is, the development of the infant is a matter of forming organized activity in place of its previous random activity through the influences of impulses (cf. Blumer, 1937); presumably, in a sense, through empirical impulses. Later, Blumer (1940) viewed an analogous separation between conceptual usage and empirical investigation as a major dilemma. Blumer (1940) concluded that there is a need for a working relationship between concepts and the (empirical) facts of experience, where the former can be checked by the latter, and the latter ordered again by the former.
It has been noted how Glaser and Strauss (1967) credited Blumer (1940) with recognizing the problems presented by the gap between ungrounded theories and, conversely, by many empirical studies unguided by any theories (e.g., Wells, 1995). Furthermore, Blumer (1969) dealt with symbolic interactionism as a perspective in empirical social science, “designed to yield verifiable knowledge of human group life and human conduct” (p. 21). Accordingly, Blumer (1969) argued, the methodological principles have to meet the fundamental requirements of empirical science. In the same vein, Cohen (1969) and Goulding (1998) have acknowledged that grounded theory is to be contrasted with the strategy of generating theory by logical deduction from a priori principles, that is, from previously established principles. Indeed, the danger with logical deduction from a priori principles, in the view of Glaser and Strauss (1967), is that the theory may be used in an opportunistic way, with doubtful fit and as an attachment in the closing pages of some very empirical research; that is, research which is, as it has been put, taken up with the rhetoric of verification. Moreover, according to Glaser and Strauss (1967), when testing hypotheses derived from the grand formal theory, the researcher may become less faithful to the data, force the data to fit the theory, and become less sensitive to the concepts emerging from the data (Cohen, 1969).
Ontology
The initial text that laid out grounded theory was a polemical work directed against deductive forms of theorizing (Dey, 1999). However, a review of the work of Glaser and Strauss, as well as of others quoted and cited in the central texts on grounded theory method, reveals a more or less total silence on any of the central developments in epistemology and philosophy of science during the past 30 years. In fact, the absence of any reference to positivism or interpretivism, or any substantial discussion of Kuhn or Popper, can be noted. But given Glaser and Strauss' (1967) position, it is apparent that grounded theory method was influenced by positivism and scientism from the beginning and that presumably neither Glaser nor Strauss would have seen this as a problem (Bryant & Sprague, 2002). In fact, grounded theory is not, as sometimes has been assumed, based on a rejection of the possibility of a science of human behavior, of hypothesis testing, or of experimental designs or quantitative data (Wells, 1995). Rather, Glaser and Strauss' argument emphasizes generation in contrast to, or at least to balance out, the notion of verification; it is not an argument for qualitative versus quantitative methods (Cohen, 1969; Wimpenny & Gass, 2000). This is reflected by Alvesson and Sköldberg (1994), who concluded that while grounded theory touches on postmodernism, it in general has a non-critical attitude, and it shares characteristics with positivism, such as a striving for objectivity, general statements, and a view of the empirical as relatively free from theory. Certainly, neither Glaser and Strauss (1967) nor Blumer (1969) rejected the existence of an empirical world (Wells, 1995). Specifically, it has been said that grounded theory combines the depth and richness of qualitative interpretive traditions with the logic, rigor, and systematic analysis inherent in quantitative survey research (Walker & Myrick, 2006).
To sum up so far, not only does grounded theory refrain from arguing for or against qualitative and quantitative methods, it spans what is qualitative or quantitative, it distinguishes between the theoretical and the empirical, and it is to be contrasted with the strategy of generating theory by logical deduction from a priori principles. The latter deserves elaboration, which in turn warrants turning attention to induction.
Not surprisingly, there is no single consistent comprehension of the logic of induction or its function in science (Chopra & Martin, 2002) (cf. Lawson, 2005). However, the familiar definition of deduction is that “it is going from the general to the particular, where the ‘general’ are a priori (i.e., ‘certain’) truths” while in contrast induction is “going from the particular to the general—the ‘particular’ being ordinary empirical observations” (Fritz, 1960, p. 132) (cf. Graziano & Raulin, 1993). Certainly, this meaning of induction and deduction is reasonable, and the distinction is sound (Fritz, 1960). Following this distinction, then, grounded theory method must be based on some form of induction to avoid a priori principles. If grounded theory method is not induction, it is based on a priori principles. The researcher constructing a grounded theory is to suspend “all prior theoretical notions” (Tesch, 1990, p. 23). Hence, in essence, grounded theory has to be induction. However, a critical paradox can be pinpointed here: While grounded theory is to be contrasted with theories developed by logical deduction from a priori principles, the method itself describes a number of a priori principles to be employed. This fundamental issue will be further explored below. Specifically, the following central features of the approach will be employed to further illustrate how, from this perspective, grounded theory method is anything but free from a priori principles.
Central Features
Features considered as central to grounded theory include: constant comparison, coding, theoretical sampling, memoing, theoretical sensitivity, and saturation (e.g., Boychuk & Morgan, 2004; Fendt & Sachs, 2008; Lambert & Loiselle, 2008; Mills, Chapman, Bonner, & Francis, 2007; Walker & Myrick, 2006).
Constant Comparison
Constant comparison is seen as all-important to grounded theory (McGhee, Marland, & Atkinson, 2007) and may be viewed as a combination of two types of data analysis processes. In the first type, the analyst codes all data and then systematically analyzes the codes to verify a given proposition. In the other type, the analyst does not engage in coding data as such but merely inspects the data for properties of categories, uses memos (introduced below) to track the analysis, and develops theoretical ideas (Glaser & Strauss, 1967; Walker & Myrick, 2006). It has been argued that the constant comparative method together with theoretical sampling (introduced below) comprise not only the core of qualitative analysis in the grounded theory method but of other types of qualitative approaches as well (Boeije, 2002; Wells, 1995). Comparing allows the researcher to engage in the activities required to develop a theory, as it is put, “more or less inductively” (Boeije, 2002, p. 393). The constant comparative analysis technique means contrasting data against itself, against evolving original data, and against existing theoretical and conceptual claims (Boychuk & Morgan, 2004). The process is maintained through all sorts of aids including close reading, rereading, coding, displays, data matrices, and diagrams (Boeije, 2002), which demand voluminous notes and/or tape-recorded transcripts as well as the examination of the data many times from different perspectives (Douglas, 2003).
Coding
Constant comparison analysis involves coding strategies (Goulding, 1998). Constant comparison and coding can be viewed as twin processes (Piantanida, Tananis, & Grubs, 2004), as data is analyzed and compared through the course of coding (Kendall, 1999). The coding procedure entails breaking the data into parts, into distinct units of meaning, which are tagged or labeled (Goulding, 1998). Specifically, codes are names assigned by the researcher to events, activities, functions, relationships, contexts, and so forth. Moreover, codes constitute the foundation for subsequent aggregation into concepts (core codes) (Douglas, 2003). These concepts are first assembled into descriptive categories. Then they are evaluated again with regard to their interrelationships and by a number of analytical steps gradually ordered into higher order categories or into an underlying core category, suggesting an emergent theory (Goulding, 1998).
Theoretical Sampling
Constant comparison also comes together with theoretical sampling (Boeije, 2002; Draucker, Martsolf, Ross, & Rusk, 2007; Neill, 2007). Although theoretical sampling is not a clearly described process (Neill, 2007), it implies that the researcher decides what data will be collected and where to find it based on tentative theoretical ideas (Boeije, 2002). Then, when the research is underway, theoretical sampling implies that sampling is decided by the analysis of previous data (Neill, 2007), making it possible to answer questions that have arisen from the analysis of and reflection on previous data (Boeije, 2002). Theoretical sampling “is the process of sampling events, situations, populations and responses” in the inductive generation of theory (Douglas, 2003, p. 51). Douglas (2003) noted how grounded theory employs theoretical sampling to sample events that indicate categories, their distinctive characteristics, so that they can be developed and related to each other.
Memoing
Memoing is another feature which has been seen as a criterion for grounded theory (Boychuk & Morgan, 2004). It has been declared that “memoing is the theoretical writing-up of ideas, separate from the data, that focuses on relationships between codes and their properties as they become evident to the analyst” (Boychuk & Morgan, 2004, pp. 609–610). Memos themselves are written theoretical questions, coding summaries, and/or hypotheses of various scope (e.g., a sentence, a paragraph, or a few pages) used to keep track of and promote coding, theory integration, and theory generation. Specifically, memos are written continuously throughout the whole research process, including the observation and analysis stages (Boychuk & Morgan, 2004; Douglas, 2003). In fact, Piantanida et al. (2004) believe that Glaser's, as well as Strauss and Corbin's, emphasis on researcher memoing is their way of continuously integrating interpretation into the constant comparative process.
Theoretical Sensitivity
The idea of theoretical sensitivity was first elaborated by Glaser (1978) and then by Strauss and Corbin (1990) (Orland-Barak, 2002). Theoretical sensitivity refers to the capability to think about the data in theoretical terms. To expand on the term data, it is argued that there are three paramount categories of data employed in grounded theory research: field data (notes), data from interviews (such as notes and recordings), and more broadly, any literature or artifacts that can be serviceable to the research. The overall consideration is the generation of primary data that are captured in the exact words and explanations of the actual respondents themselves (Douglas, 2003). It is argued that theoretical sensitivity is the ability of the researcher to work with the data in both theoretical and sensitive ways. That is, the researcher can theoretically and conceptually think about the data from a distance, while simultaneously maintaining an in-close level of sensitivity and understanding about the process and their involvement in that process (Walker & Myrick, 2006).
Saturation
In the end, the goal is to develop an explanatory theory—to reach saturation, which in the context of grounded theory implies a point where categories are completely explained and accounted for, and when relationships between them have been assessed (cf. O'Reilly & Parker, 2013).
The point is that the above features have been described and elaborated by many scholars to a high degree of intricacy with regard to how, where, and when they are to be employed. That is, the features exhibit a level of intricacy and a normative character that, from the perspective of the present article, is not in agreement with a theory free from a priori principles. The issue will be further elaborated below, but at this juncture, the problem may be demonstrated by, or even rendered down to, the conflation of grounded theory as a method and as a theory. Grounded theory “assumes that part of the method, itself, is the writing of theory. The way data is coded, ideas are memoed, and memos are sorted are all partly focused on designing and facilitating the writing of the theory” (Glaser, 1978, p. 7). The quotation illustrates how central themes have been said to, only partly, focus on designing and facilitating the writing of theory. A further exploration of the dilemma with the normative elements and their validity warrants an investigation of the research process.
The Research Process
Grounded theory method accentuates systematic data collection, analysis, and handling (Douglas, 2003; Glaser, 1978). As noted by Wells (1995), Glaser and Strauss (1965) illustrated both the features of grounded theory and the central characteristics of the constant comparative method on which it rests: Glaser and Strauss (1965) made an initial selection in order to study the phenomenon at hand. In particular, the authors selected a hospital ward to study interactions between staff and dying patients. Later, Glaser and Strauss selected wards for further study on the basis of their analysis of interactions on the first ward. Selection, data collection, and analysis continued through such iterations until the authors identified a core idea that could account for variability in interactions (Wells, 1995). As Goulding (1998) noted, the theory forms during the procedure itself, through the ceaseless iterations of data collection and analysis.
Douglas (2003) acknowledged that the process demands simultaneous collection, coding, and analysis of data as the basic activity. Emphasis is on the respondent's own interpretations and intentions, with the least possible researcher interference. This may be seen reflected by Walker and Myrick (2006), who argued that the principal interference into the data is coding. However, the coding process stretches further than the general notion of coding because the repetitive process demands admitting change to early coding as the researcher moves in the direction of theory generation (Douglas, 2003). Similarly, Wells (1995) noted that once the researchers recognize the core idea, they need new examples of interaction to elaborate it: Elaboration is contingent on the process of confirming and disconfirming the concepts and their relationships. The procedure, it is argued, continues up to the point where no new concepts can be recognized and an effective and parsimonious explanation can be suggested.
It has been contended that the core category must appear often in the data, which implies that it is more connected to other categories. It will consequently take longer for the core category to be adequately elaborated in terms of its distinguishing attributes and connections to other core codes. The core category constitutes the base for developing more formal theory, and the level of development will advance as the core category is analyzed and adjusted. Specifically, a category code should explain a considerable part of the variation in an issue being studied, an event or pattern of behavior, whereas the circumstances and outcomes are portrayed by other core codes. The intent is to methodically obtain categories that serve as focal concepts, which in turn contribute to theoretical development. Categories are coded until they are dense with theoretical meaning (Douglas, 2003). In the end, however, according to Wells (1995), the theory depends on, as it is put, imaginative and artful interpretation of the data.
One angle on the overall problem pertains to what, exactly, comprises “systematic” data collection compared to non-systematic, and a “core category” compared to a “non-core category.” On the one hand, what is systematic runs the danger of being systematic only in relation to what is less systematic. That is, with no rules clearly defining what systematic is and what non-systematic is, there may always be something to label systematic. A similar point can be made with no specific number of connections defined, or with no rules of thumb referring to how many connections a category must have for it to be a core category. Hence, the distinction between core and non-core category appears to be a self-fulfilling prophecy in that if a theory is developed, core and non-core categories can presumably always be found and labeled accordingly. In other words, the concepts appear too loosely defined to allow explicit identification, which means that they describe what assumedly always takes place when anyone generates a theory. The distinctions lose their meaning. On the other hand, if the concepts were to be more explicitly defined, they would even more heavily underline that grounded theory, in this sense, is based on a priori principles. Of course, this dilemma may be seen as an extension of the previously mentioned paradox of grounded theory being contrasted to a priori principles.
Another now discernible part of the problem pertains to coding and, specifically, to the bases for coding. Turning attention back to symbolic interactionism, Blumer (1937) argued that the development of the infant is a question of shaping organized activity in place of previous random activity (based on the influence of impulses). Hence, coding as an activity, being nonrandom (i.e., systematic), is presumably a result of previous impulses, a priori impulses, or theory resulting from previous impulses. Even Glaser and Strauss admit that the researcher will not enter the research area without ideas (Heath & Cowley, 2004). Arguably, coding is systematic and nonrandom, and thus based on theoretical notions of how phenomena are related. In this sense, pure induction, in practice, becomes impossible. Stated differently, grounded theory method is seemingly caught between being based on a priori principles or being induction, while pure induction may be questioned altogether. However, as acknowledged by Wells (1995), although the primary procedures describing the original devising of the constant comparative method have stayed the same, Glaser (1992) and Strauss (i.e., Strauss & Corbin, 1990) currently define and use the method in different manners. They seem to disagree substantially when it comes to the function of literature in relation to a priori ideas (Heath & Cowley, 2004), which warrants an assessment and further elaboration in light of a comparison between Glaser and Strauss.
Glaser Compared to Strauss
The differences between Glaser and Strauss have attracted much attention (e.g., Cutcliffe, 2005; Goulding, 1998; Heath & Cowley, 2004; LaRossa, 2005; Walker & Myrick, 2006). It has been argued (i.e., Walker & Myrick, 2006) that much of the controversy between the two authors revolves around different perspectives in terms of the role and degree of the researcher's intervention in procedures. Moreover, Douglas (2003) concluded that the differences revealed since Glaser and Strauss' shared publication (i.e., Glaser & Strauss, 1967) can be seen in Glaser (1992) emphasizing that it is essential to be more creative and less procedural in methodology. Strauss (i.e., Strauss & Corbin, 1990), in contrast, has communicated a stricter, more one-dimensional method. For example, Glaser's (1992) statement, “Too many method rules get in the way; they block emergence” (p. 71), may be seen as a contrast to Strauss and Corbin's (1990) proclamation that “memoing and diagramming begin at the inception of a research project and continue until the final writing” (pp. 198–199), together with their listing of fifteen explicit features of memos and diagrams. That is, Strauss' work with Corbin (i.e., Strauss & Corbin, 1990) is generally more prescriptive than Strauss' original work with Glaser (i.e., Glaser & Strauss, 1967) (Wells, 1995). The versions are different to the degree that references to grounded theory are incomplete if not given together with references to the specific school and even period of development to which it pertains (Geiger & Turley, 2003). In this article, the processes are presented in accordance with Heath and Cowley (2004). As acknowledged by Heath and Cowley (2004), it should be noted that the original tome (i.e., Glaser & Strauss, 1967) depicted only two levels. The first pertained to coding into as many categories as possible, and the second pertained to integration of categories.
Glaser (1978) also depicted two levels, that is, substantive and theoretical coding, but the former actually consists of two processes (open and selective coding) (Walker & Myrick, 2006). Hence Glaser (1978) advocated substantive (open), substantive (selective), and theoretical coding while Strauss (i.e., Strauss & Corbin, 1990) advocated open, axial, and selective coding. It should also be noted that the coding stages were not intended to be separate and linear in either the original publication or in subsequent individual contributions by Glaser or Strauss (Heath & Cowley, 2004).

Comparison of two processes. Adapted from “Developing a Grounded Theory Approach: A Comparison of Glaser and Strauss,” by H. Heath and S. Cowley, 2004, International Journal of Nursing Studies, 41(2), pp. 141–150.
Open Coding Compared to Substantive Coding
Walker and Myrick (2006) acknowledged how both Glaser (i.e., Glaser, 1978) and Strauss (i.e., Strauss & Corbin, 1990) viewed open coding as the initial step of the coding process (cf. Douglas, 2003). Open coding demands that transcripts are analyzed, as it is put, word-for-word, line-by-line, and phrase-by-phrase. The aim is to initiate the unrestrained labeling of all data (coding in as many ways as possible) as well as to assign representative codes to salient incidents in the data. As the process advances, iterative comparison of what was previously coded to novel data results in a progressive advancement. It is important to note that it is not data on their own that build conceptual categories, their characteristics, and the developing theory. It is the theorization of data and their phenomena that produces grounded theory. The theory becomes grounded in the data; the theory is not the data themselves (Douglas, 2003). It has been noted that according to Glaser there is no structure considered in advance to follow. Only patience, persistence, and iteration through constant comparison will take the researcher to emergent categories and their characteristics. Open coding is complete when the researcher starts to perceive a potential for a theory that can account for all of the data (Walker & Myrick, 2006).
In contrast, Strauss' (i.e., Strauss & Corbin, 1990) version of open coding differs in two significant respects. First, Strauss and Corbin (1990) believe that specifically finding the dimensions of a category's properties, for example the dimension “short” to “long” for the property of distance, is a central mission of open coding. On this point Glaser (1992) has argued that Strauss and Corbin skip too far ahead in the process by developing the dimensions and that grounding therefore is usually lost. Second, according to Glaser (1992), forcing questions and elaborate techniques are not required if the data is allowed to speak. In contrast, Strauss and Corbin (1990) have contended that theoretical sensitivity is attained through the employment of specific analytic tools such as questioning; analysis of words, phrases, or sentences; the so-called flip-flop technique; and making close-in and far-out comparisons (Walker & Myrick, 2006).
Axial Coding Compared to Selective Coding
Strauss and Corbin's (1990) intention with this stage has been recognized as being to reassemble the broken-down data in novel ways by establishing connections between categories and their subcategories. Such axial coding is done using a so-called paradigm, focusing on “the conditions or situations in which phenomenon occurs; the actions or interactions of the people in response to what is happening in the situations; and the consequences or results of the action taken or inaction” (Walker & Myrick, 2006, p. 553). It has been claimed that during axial coding, work is geared towards comprehension of categories in relation to other categories and their subcategories. Moreover, in the context of axial coding, Strauss and Corbin (1990) have made several references to verification, validation, and deductive thinking, with which Glaser (1992) disagrees (Walker & Myrick, 2006).
Glaser (1992) believes that Strauss and Corbin's (1990) above-mentioned coding paradigm imposes one coding family on the data and through this procedure forces the data into a conceptual description. Indeed, Glaser's second step is labeled selective coding, but this is the second half of Glaser's first stage (substantive coding). Selective coding, according to Glaser (1992), implies the transition from open coding to limiting the coding procedure to codes that relate to a core category. It has been argued that this type of selective coding bears very little resemblance to axial coding, but that both approaches seem to contain an element of selectivity (Walker & Myrick, 2006).
Theoretical Coding Compared to Selective Coding
In the final phase, the researcher is faced with merging the data around a central theme to produce a theory (Walker & Myrick, 2006). It has been pointed out that a theory, according to Strauss and Corbin (1994), is a set of associations that offers a credible explanation of the phenomenon being studied (Goulding, 1998). Specifically, Strauss and Corbin (1994) have used the label selective coding, which again should not be confused with Glaser's selective coding step (Walker & Myrick, 2006). Selective coding, according to Strauss (i.e., Strauss & Corbin, 1998), is focused on what has emerged as a central core category (Douglas, 2003). Selective coding is "the process of integrating and refining categories" (Strauss & Corbin, 1998, p. 143).
In contrast, Glaser's (1978) final stage entails integration through theoretical coding. This stage may be more comparable to Strauss' axial coding, though Glaser's stage implies a broader array of perspectives on the data than Strauss'. Theoretical coding, according to Glaser (1978), is the procedure of employing theoretical codes, which emerge from cues in the data, to conceptualize how substantive codes may relate to each other as hypotheses to be merged into a theory (Walker & Myrick, 2006).
Embedded Theory
Although the approaches associated with Glaser and Strauss, respectively, are different, it may be questioned whether one of the approaches is really more correct than the other. While Glaser's version may be viewed as more descriptive and less prescriptive than Strauss', it may be questioned whether the differences are meaningful. If the differences are meaningful, grounded theory is arguably not induction: meaningful differences logically imply a priori rules, and hence the method itself constitutes a priori rules. On the other hand, if the differences are not meaningful, what then is grounded theory? Walker and Myrick (2006) noted that it has been questioned whether Strauss and Corbin's paradigm is paradoxical given the emphasis on emergence and discovery in grounded theory. Indeed, the "paradigm seems to impose a conceptual framework in advance of data analysis, it does not seem to sit easily with the inductive emphasis in grounded theory" (Dey, 1999, p. 14).
If the two versions are measured against the original formulations, Glaser's version has been seen as more adequate (Charmaz, 2000; cf. Walker & Myrick, 2006). However, the version associated with Strauss and Corbin also seems to have evolved more recently into a less rigid (cf. Charmaz, 2000) and possibly less prescriptive posture. Assuming that Glaser is more adequate and less prescriptive, a closer look at the issue of theoretical sensitivity may illustrate an additional angle on the problem. According to Glaser (1978), intense reading allows the researcher to develop theoretical sensitivity, while, in seeming contrast, the first step in acquiring theoretical sensitivity is to step into the research environment with as few a priori determined ideas as possible. With as few a priori determined ideas as possible, "the analyst is able to remain sensitive to the data by being able to record events and detect happenings without first having them filtered through and squared with pre-existing hypotheses and biases" (Glaser, 1978, p. 3). Still, other theories are supposedly treated as part of the data and in turn compared to the developing theory. According to Glaser (1978), the researcher is required to treat everything as data; regardless of whether the researcher's "material is research data, others ideas on it or the literature, it is to be compared to the ongoing data and memos for the purpose of generating the best fitting and working idea" (p. 7). It has also been noted that, for Strauss, broad understandings of past experience and literature are influences that grant sensitivity, but so are specific understandings that can be used to create hypotheses (Heath & Cowley, 2004). In short, the notion of "as few a priori ideas as possible" combined with intense reading and treating other theories as data is arguably paradoxical.
Entering the research environment with as few ideas as possible (i.e., induction) does not sit well with treating other theories as data (cf. deduction). Moreover, the idea of any researcher functioning without any biases or pre-existing hypotheses, as suggested by Glaser (1978), may be questioned against the background of, for example, symbolic interactionism, as the essence of organizing may in itself be viewed as biasing.
An analogous problem can be seen in constant comparison. The comparative analysis technique can be seen as housing deduction, built in through the forming of ideas and reasoning based on those ideas, as well as through subsequent selection and comparison to new data (based on the a priori ideas). Indeed, this observation may not appear controversial against the background of science as characterized by the combination of inductive and deductive thinking, where both processes are needed (e.g., Graziano & Raulin, 1993), something that is in fact suggested by Strauss and Corbin themselves (cf. the remarks regarding deductive reasoning associated with axial coding, above) (Walker & Myrick, 2006; cf. Munkejord, 2009). A fruitful comparison can be made to Popper and Einstein. The former declared that his view of science is an extension of Einstein's, and according to Einstein, a pre-judgment is present when the scientist initiates research; otherwise, how could any selection of facts be made? Einstein asserted that the scientist initiates research in a manner nearly opposite to the inductive. The origin of scientific theory is not the facts at hand on their own, but hypotheses founded upon the scientist evaluating and grouping facts. The scientist has a preconceived judgment and has to assume or infer a hypothesis not necessarily based on facts, but one that has to be tested by experience; that is, the hypothetico-deductive arrangement of scientific theory (Avshalom, 2000; cf. Lawson, 2003). In Einstein's view, a theory can be identified as inaccurate if, for example, a fact does not conform to its consequences (Avshalom, 2000). Popper (2002) analogously argued that observation is selective: a choice has to be made with regard to, for example, an interest or a problem. Induction, or inference based on many observations, according to Popper (2002), is an unsubstantiated belief, part neither of ordinary life nor of scientific method (cf. Lawson, 2005). Certainly, a fundamental assumption of, for example, Dewey and Mead's theory of mind is that the mind has no extraordinary physical matter that is different from the body or other physical matters. The mind is built up of the same materials that come into play when environmental interactions play themselves out, such as when lower animals or even inanimate things interact. Past consequences of past activities are employed to allow subsequent change in activities (Wynne, 1952). Indeed, neurological models, as well as results from experimental research, have suggested that induction over a limited number of facts has no function in reasoning, and that human beings seem to treat information through hypothetico-deductive thinking (Lawson, 2005; cf. Lawson, 2003).
Turning attention to Blumer (1937) once again, a direct comparison can be made to the view taken by symbolic interactionists of how humans make sense of the world under the influence of impulses. Presumably, the individual, organized through impulses, emits a response at some level, given the stimuli at hand, regardless of whether the individual is engaged in a process labeled grounded theory. From this perspective, the somewhat organized individual is organized a priori, and this organization, under the stimuli at hand, controls what has been labeled an inductive reasoning process, including grounded theory method. Hence, while any process named grounded theory (e.g., open, axial, and selective coding) may focus on a relatively specific area of interest, and the constant comparison process itself can be seen as housing deduction, the process is also always embedded in what is generally a priori at each and every level in the process (see Figure 2). Figure 2 illustrates a simplified model of how such an embedded research process may in theory be elaborated infinitely. The focal stages of the process (in this case labeled in accordance with a grounded theory approach) become a priori assumptions relative to succeeding stages in the process, while outcomes may also adjust preceding a priori assumptions (the two-way arrows), which in turn may result in further adjustments of assumptions. Comparable processes can be assumed to take place outside the relatively specific area of interest, in that more peripheral knowledge influences, and is influenced by (here illustrated horizontally), the area of interest in the center. Presumably, what is an a priori assumption is relative in time to what is a posteriori.

Figure 2. Embedded grounded theory method.
In light of the above, the discovery of grounded theory in the year 1967 (i.e., Glaser & Strauss, 1967) may be questioned. It can be assumed that whenever useful theories have been absent, theories have, since the dawn of humanity, been grounded in empirical data; induction was hardly discovered in 1967. Moreover, from the above perspective, there are always a priori assumptions, hence pure induction is impossible. The existence of any single grounded theory method and the validity of adhering to original or specific principles (cf. Fendt & Sachs, 2008) can likewise be challenged. While grounded theory has been used to generate theory where little is already known, things are indeed known (or assumed). Even if the degree of prescriptiveness or descriptiveness that can be ascribed to the techniques associated with Glaser and Strauss differs, prescriptive procedures or assumptions are there and are known by those employing the techniques (to the best of their knowledge). That is, a priori assumptions exist and have been elaborated since 1967. It can be contended that without a method in terms of how research is to be carried out, there is no method. But with such a method there is a priori theory and not merely induction, regardless of whether the method is labeled a perspective. It would follow that the method for creating knowledge, discovered by Glaser and Strauss, is one of the ingredients which paradoxically makes the so-called inductive methodology non-inductive. If the benchmark, while presumably non-achievable, is set to true induction, the prescriptive a priori rules or assumptions are themselves unbidden and, to the extent to which they are there, arbitrary. From this perspective, there is not one single form of grounded theory method; methodological divergence or diversification is valid. Valid divergence in turn arguably promotes tailoring to the specific research issue and situation at hand rather than strict adherence to a specific practice.
This may become more important as the method is deployed within increasingly diverse research fields. Whether a theory is grounded or not simply becomes a question of whether it has been empirically tested. Based on the above, a theoretical platform may be founded which allows a plethora of methodological approaches to be developed and judged on their own merits. The amount of grounding, that is, testing, or the closeness to the empirical data, may be tailored depending on the specific research question or situation. As a logical consequence, showing the kind of testing, how it was carried out (cf. Mauthner, 2000; Schulenberg, 2007), and to what extent, becomes of primary importance. Put differently, methodological transparency through the description of the actual principles and procedures employed, and adapting them to specific research objectives and settings (cf. Figueroa, 2008), is more important than adhering to entrenched principles or stepwise procedures. Ideal types or classes may be developed based on, for example, degree of grounding, or on classification in terms of reliance on the researchers' own a priori assumptions versus other researchers' a priori assumptions (theories). Researchers diverging from the original or specific principles of grounded theory method may have known the above intuitively (cf. Jones & Noble, 2007). Others may merely have lacked a formal theoretical justification for tailoring their approach to specific circumstances, even though they may have thought doing so would be adequate. In fact, this may explain the practice of making generic references to grounded theory rather than specifying what was done. In such cases, the consequence may be that evaluating results becomes problematic, which in turn may hamper progress within the field of study.
Conclusions
Divergent grounded theory can be considered valid. For a theory to be truly grounded in empirical data, the method resulting in the theory should be the equivalent of pure induction. Thus, detailed, specified, stepwise a priori procedures may be seen as unbidden or arbitrary. Paradoxically, without a procedure in terms of how research is to be carried out, there is no method; but with such a method there is a priori theory and not merely induction, regardless of whether the method is labeled a perspective. The constant comparison technique together with theoretical sampling arguably implies deduction. The constant comparison technique itself may be characterized as embedded in a priori assumptions, and reasoning in general may be characterized as hypothetico-deductive. Hence, any notion of pure induction can be challenged, and grounding a theory may be viewed as testing it empirically. The extent of testing, the closeness to the empirical data, may be tailored to the specific study at hand. As a consequence, communicating what kind of, and what extent of, testing was carried out in a study becomes of primary importance, rather than adhering to specific rules or procedures. Presumably, pure induction may be employed as a benchmark to compare and classify different approaches in terms of, for example, empirical focus. In this way a theoretical platform may be created for nurturing a plethora of approaches, each tailored and evaluated on its own merits.
