Abstract
The concept of saturation in qualitative research is widely debated. Saturation refers to the point at which no new data or themes emerge from the data set, indicating that the data have been fully explored and used to their full potential to achieve the research aim, and that the findings are therefore robust. However, there is some mystification and semantic debate surrounding the term, and it is not always clear how many rounds of research are needed to reach saturation, or what criteria are used to make that determination during the thematic analysis process. This paper focuses on the actualisation of saturation in the context of thematic analysis and develops a systematic approach to using data to justify the contribution of research. Consequently, we introduce a distinct model to help researchers reach saturation through refining or expanding existing quotations, codes, themes and concepts as necessary.
Introduction
Qualitative research is assessed by determining whether all the necessary information to address the research questions has been gathered and evaluated, specifically whether a point of data saturation has been achieved (Kerr et al., 2010). Saturation is a crucial notion in qualitative research, as it helps to guarantee that the data have been thoroughly examined and that the findings are strong and trustworthy (Glaser & Strauss, 1967). “Saturation refers to the point at which no further data can be found, preventing the sociologist from further developing the characteristics of the category.” The state of theoretical saturation is achieved by the combined process of gathering and examining data (ibid., p. 61). Saturation in qualitative research denotes the stage at which the collected data have been exhaustively examined and comprehended, and no additional themes are emerging. When saturation occurs, the researcher can ascertain that their research process has completely or almost completely filled a gap in theory. However, above and beyond its obvious importance, the idea of saturation is a subject of difference among qualitative researchers, who disagree about its meaning, clarity and practicality (O’Reilly & Parker, 2013; Strauss & Corbin, 1998). Indeed, it is unclear what the term actually encompasses (Bowen, 2008) and what specific process is required to claim that saturation has been accomplished (O’Reilly & Parker, 2013). Fundamentally, rule-of-thumb estimates are used to determine how many observations or interviews are needed, and claims of saturation are sometimes made without specifying the criterion used to support the claim (Glaser & Strauss, 1967).
Saturation is a concept employed in qualitative research to ascertain the threshold at which the gathered data are sufficient to address research inquiries or substantiate the assertions and findings posited by the researcher. This ensures that the data collected are relevant and comprehensive (Charmaz, 2006; Kerr et al., 2010). Saturation helps the researcher make informed decisions about data collection and analysis. Researchers utilise the notion of saturation to assess whether to continue data collection, stop it, or modify research methodologies to investigate areas that lack sufficient saturation in the study process (Malterud et al., 2015; Roy et al., 2015). Much of this work focuses on the data collection stage to determine saturation. In contrast, this research investigates both data collection and thematic analysis to measure saturation. Saturation, under the framework of thematic analysis, denotes the point at which no further themes or patterns are discerned from the scrutinised material. At this point, the data have been fully utilised to generate innovative insights and augment the existing knowledge repository. This means that the researcher has discovered all significant themes, codes and patterns within the data set, and that all of these are fully interpreted so as to add new knowledge (Naeem et al., 2023). Saturation is the analysis and interpretation of data to their full potential. This notion is critical in qualitative research due to its capacity to augment the practicality of the collected data, thereby bolstering the validity and dependability of the conclusions drawn from the thematic analysis. This paper aims to illustrate how thematic analysis may be further improved as a means of ensuring saturation. Indeed, through iteratively refining and expanding upon pre-existing patterns, codes, themes and concepts, we provide a framework for achieving saturation at various stages of the thematic analysis process.
Revisiting Saturation in Qualitative Research: Integrating Current Literature Insights with the Proposed Saturation Approach
The concept of saturation is pivotal in qualitative research, signifying the point at which sufficient data have been collected and analysed to comprehensively answer research questions.
As Ganle (2016) demonstrated with narrative and focus group approaches, saturation affects the volume of information collected as well as the quality of the analysis carried out. Nevertheless, the implementation of saturation theory is not without difficulties. Kerr et al. (2010) argue that the use of saturation in qualitative data analysis can present challenges. Guest (2006) and O’Reilly and Parker (2013) both acknowledge the challenge of evaluating saturation due to the lack of practical standards and clarity in identifying sufficient levels. Saturation is a multifaceted and iterative procedure that aims to develop a thorough understanding by continuously gathering and analysing data, as described by Glaser and Strauss (1967). Charmaz (2006) highlights that saturation pertains more to the comprehensiveness of the theory than to the size of the sample. Dube et al. (2016) and Boddy (2016) agree, asserting that saturation should inform both the applicability and the theoretical soundness of investigations. Urquhart (2013) and Birks and Mills (2015) have offered other perspectives on saturation, emphasising the formation of new codes or themes; this highlights a transition towards the analytical dimension of research. In their study, Starks and Brown Trinidad (2007) introduce the concept of theoretical saturation as the complete and thorough inclusion of constructs found in the data that contribute to the development of the theory.
The operationalisation of saturation can only be done efficiently once data collection has taken place, which introduces an additional level of difficulty in establishing sample sizes beforehand (Hammersley, 2015). This issue is made worse by the lack of clear guidelines for achieving saturation, a problem that Morse, Barrett, Mayan, Olson, and Spiers (2002) and Guest (2006) have brought to attention despite its importance in many institutional contexts. Glaser and Strauss (2009) emphasise the value of an iterative process and highlight the importance of theoretical saturation as a key indicator of a well-developed grounded theory. Grounded theory employs a constant comparative strategy to achieve saturation of themes and coding (Charmaz, 2006). In qualitative research, the notion of saturation is comprehensive, embracing several aspects such as assessing the sufficiency of data collection and ensuring thorough and conceptually profound analysis. Saturation is an essential aspect of qualitative research, but its implementation varies and is often a subject of disagreement due to the many methods and goals of qualitative research. Although saturation is a crucial notion, it brings up numerous obstacles and complications; therefore, a more systematic approach, especially in thematic analysis, is required. Extending the use of saturation beyond its initial application in qualitative data analysis is difficult, as it tends to lose its connection to the ongoing process of gathering and analysing data, an issue emphasised by Kerr et al. (2010). The absence of clear and transparent guidelines for defining and attaining saturation, as pointed out by Bowen (2008) and O’Reilly and Parker (2013), adds further complexity to its implementation, giving rise to uncertainties over its actual significance and the most effective methods to achieve it.
Braun and Clarke (2022) discuss prevalent challenges encountered in thematic analysis (TA) and propose a “knowing practice” approach, emphasising the importance of researchers being intentional and self-aware. The authors engage in a discourse regarding the variety of TA methods, emphasising the importance of avoiding methodological inconsistency when combining different approaches. They promote the adoption of an individual analytical viewpoint, differentiating between descriptive and interpretive TA, and reframe the notion of researcher bias in favour of subjectivity as a research instrument. Their commentary aims to guide rigorous TA practice, which is especially relevant for researchers striving for data saturation, emphasising the interpretative nature of identifying themes and the importance of reflexivity in the process. This work redirects attention from grounded theory to a distinct facet of qualitative research, namely thematic analysis. While grounded theory aims to create new theories, thematic analysis can also support this goal but has a broader scope, as mentioned by Naeem et al. (2023). Therefore, the concept of saturation as used in grounded theory may not be directly applicable to thematic analysis. This study focuses on the attainment of saturation in thematic analysis, which refers to the stage at which the data have been exhaustively employed to fulfil the research objectives. Although thematic analysis and grounded theory share some similarities, this paper draws from the grounded theory approach to saturation where relevant, to better inform the thematic analysis process.
This approach is echoed by Charmaz (2006) and Sandelowski (1995), who place greater emphasis on the quality of the sample and the advancement of the theory than on its size. Dube et al. (2016) and Boddy (2016) note that this subtlety is often lost when saturation is applied outside of grounded theory, such as in thematic analysis. Naeem et al. (2023, p. 1) “posit that the thematic analysis process we have constructed exhibits considerable adaptability and potential for utilization in relation to grounded theory (GT), ethnographic approaches, and narrative approaches and, with some adaptation, more descriptive positivist-based methodologies would benefit”. Thematic analysis can thus be used alongside other methodological approaches. Hence, the focus of this paper is to elucidate the process of reaching saturation within thematic analysis, delineating the point at which researchers may conclude the analytical phase.
The discussion on saturation in qualitative research has been extensively enriched by scholars, predominantly focusing on the data collection phase. Fusch et al. (2015) critically examine data saturation, questioning its adequacy in qualitative analysis. Tran et al. (2017) extend this scrutiny through mathematical models, while Mwita (2022) investigates factors contributing to saturation, offering a comprehensive framework for its operationalisation. Nixon and Wild (2008) and Kerr et al. (2010) concentrate on patient-reported outcomes to demonstrate the realisation of data saturation in qualitative inquiry. Lowe et al. (2018) propose a quantitative approach to measure when thematic development ceases, LaDonna et al. (2021) venture beyond saturation to probe the rigour of qualitative interviews, and Gugiu et al. (2020) introduce a method for supporting data saturation decisions concerning sample sizes. Alam (2021) employs a methodical qualitative case study to thoroughly analyse the process of collecting data and ensuring saturation.
Wray et al. (2007) explore the concept of ‘researcher saturation’ and investigate how data triangulation impacts the research process. Onwuegbuzie (2003) advocates for increasing saturation during data analysis through the use of several analytical methods. The importance of considering several perspectives is emphasised by O’Reilly and Parker (2013) and Hancock et al. (2016), who provide novel approaches for analysing focus group data in order to determine saturation. Notably, while these studies target saturation in data collection, this paper aims to devise a systematic process for achieving saturation during thematic analysis. The scholarly work, including empirical tests by Hennink and Kaiser (2022), the conceptual and operational investigations by Saunders et al. (2018), and DiStefano and Yang’s (2023) three-phase ethnographic method, underscores the diversity in saturation approaches. Yet a structured method is essential for reaching saturation in thematic analysis, ensuring that data reach their full potential to meet the research aims. This gap highlights the tendency to concentrate on data saturation during collection rather than adopting a nuanced, systematic approach during thematic analysis to grasp saturation’s true essence, where the data fully contribute to the research’s goals.
Saturation is defined as the point at which the researcher has exhausted all possible uses of the gathered data. This critical stage is marked by a cessation in the emergence of new patterns, codes and themes, indicating that the data's utility in addressing the research questions has been maximised. The primary objective of this paper is to elucidate a systematic approach to attaining this level of comprehensive data utilisation within thematic analysis, diverging from the conventional focus on saturation during the data collection phase. Instead, it underscores the importance of reaching saturation during the thematic analysis itself, ensuring that the researcher has thoroughly exploited the data's capacity to contribute to the research objectives.
Discussion
This paper develops the PRICE model (Figure 1), which could help researchers to reach the saturation point in thematic analysis. Saturation in this paper refers to the point at which no new keywords, patterns, codes or themes are emerging from the collected data set, indicating that the data have been fully explored, understood and used to their full potential to achieve the required aim of the research. Table 1 describes the major terms of the PRICE model, which will be discussed and synthesised one by one in this section. The first of these, perspectivation, involves viewing the data from all possible perspectives, such as research questions, theoretical frameworks and epistemological frameworks, to ensure the thematic analysis is comprehensive and yields reliable results. Selecting relevant quotations and keywords to develop the codes that emerge from the data requires thinking and exploration from various perspectives in order to gain a better understanding of the data. This is how the saturation stage of thematic analysis is reached: by refining and expanding upon previously established keywords, quotations, codes and themes.
Figure 1. PRICE model to reach saturation in thematic analysis.
Perspectivation
Perspectivation can be used to reach saturation in thematic analysis; it refers to a researcher’s attempt to take into consideration different perspectives on the research problem, which leads to richer insights. Perspectivation occurs when a problem is considered from a certain point of view, framed linguistically from that point of view, discussed by different participants, or captured by the researcher through observation of different events during qualitative data collection. The explicit reporting of different perspectives in the data analysis helps to reach saturation and ensures the clarity of the analysis.
Selection of Accurate Keywords and Quotes
The selection of quotations and keywords is the foundation of perspectivation and the first element of saturation. Beck (1993) suggested that researchers' goals in selecting quotations are twofold: to display the breadth of language (viscosity-depth) and the connections between terms and concepts, and to show the intensity of feeling, confusion or hesitance. The selection of accurate quotes of what participants have said has become the “gold standard” in qualitative investigations, and a number of such quotations will be needed to report different perspectives (Brown, 2010; Guest et al., 2012). Quotations can bring content to life (White et al., 2014), and a competent application of research ethics avoids the (mis)use of catchy words (Silverman, 2017). Consideration of power quotes and proof quotes might help a researcher to choose appropriate quotes. Power quotes show the reader the most important data points and can also be used to describe the results; they convey a “sense of being there, of visualising the [participant], feeling their conflict and emotions” (Ambert et al., 1995, p. 885) and “are the most compelling bits of data you have, the ones that effectively illustrate your points” (Pratt, 2009, p. 860). Naeem and Wilson (2022) suggested that if some keywords are commonly used by the participants, and if these keywords can answer the research question by making a story, then these strong keywords should be selected, and all quotations containing them should be selected at the first stage of the thematic analysis (Naeem & Ozuem, 2021a, 2021b).
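This keyword-driven first pass over quotations can be sketched computationally. The following Python sketch is purely illustrative, not part of any published method: the function name, keyword list and sample quotations are our own assumptions. It simply gathers every quotation containing at least one researcher-chosen strong keyword, as a starting point before closer reading.

```python
def select_quotations(quotations, strong_keywords):
    """Return each quotation that contains at least one strong keyword,
    paired with the keywords it matched (a crude first-pass filter)."""
    selected = []
    for quote in quotations:
        lowered = quote.lower()
        # Case-insensitive substring match against each strong keyword
        matched = [kw for kw in strong_keywords if kw in lowered]
        if matched:
            selected.append((quote, matched))
    return selected

# Hypothetical interview excerpts and strong keywords
quotes = [
    "Trust matters most when I buy online",
    "Delivery speed is everything to me",
    "Without trust I would never order again",
]
print(select_quotations(quotes, ["trust", "delivery"]))
```

Simple substring matching like this will miss inflected forms and synonyms, so the output is only a starting point; the researcher still reads every selected quotation in context.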
The concept of saturation has been thoroughly examined in qualitative research, with various studies contributing to a deeper understanding of this key methodological aspect. Guest's (2006) study delves into the number of interviews required to reach saturation, emphasising the significance of data saturation and variability in this context. Fugard and Potts (2015) suggested that one way to reach saturation of findings is to consider whether the sample size is large enough. Others have suggested that the researcher should also consider the data source, and whether the data collected from that source are enough to reach saturation (Galvin, 2015; Guest, 2006; Malterud et al., 2015; Sandelowski, 1995). However, all of these suggestions refer to the data collection process; there is therefore a need to answer the question of how a researcher can reach saturation of the quotations used in analysis, which is addressed in the next section of this paper.
Frequency of Keywords
Taking different perspectives into account, the use of numerical representations of themes is a common topic of debate when discussing the dissemination of qualitative findings. According to Sandelowski (2001), qualitative researchers tend to fall into one of two camps: “anti-number” or “pro-number”. The anti-number camp argue that counting themes runs counter to the goal of qualitative work, which is to engage in an in-depth examination of process and meaning. Proponents of numerical analysis argue that the utilisation of numbers facilitates the determination of the frequency at which a given phenomenon occurs (Morse, 1994). The continuous discourse surrounding this matter often leads to the deliberate omission, underutilisation, or disregard of numerical data in qualitative research (Olson, 2000). Even though numerical analysis plays a relatively minor role in qualitative inquiries, adeptness in this domain is critical for the successful execution of quantitative data-driven research (Sandelowski, 2001).
Galvin's (2015) research in building energy consumption examines the reliability of knowledge produced by qualitative interviews and its role in achieving saturation. Guest et al.'s (2017) study shifts focus to focus groups, aiming to establish an evidence base for the nonprobability sample sizes necessary for saturation. Facts and figures can enrich and elucidate stories in remarkable ways (Olson, 2000). Qualitative researchers face a unique set of challenges when it comes to utilising quantitative methods that yield credible findings; if they want their numerical efforts to matter, they will have to determine when and what to count (DeSantis & Ugarriza, 2000). The manner in which participants represent the same situation through varied vocabulary and expressions has a substantial effect on the interpretation of textual significance in qualitative research. Therefore, for researchers to comprehend the terminology utilised and fully understand the numerous implications present in a specific dataset, a wide array of terms and expressions is necessary (Dey, 2003). A quantitative approach is introduced by Lowe et al. (2018) to assess thematic saturation in qualitative data analysis; this method provides a quantifiable tool for identifying the threshold at which saturation is achieved. Furthermore, the analysis of effect sizes in qualitative research conducted by Onwuegbuzie (2003) is highly regarded and significantly contributes to the broader discourse surrounding saturation. Moreover, numbers serve as potent rhetorical devices, enabling one to emphasise the arduousness and intricacy inherent in qualitative research; they can, for example, help researchers justify seemingly limited sample sizes (Greenhalgh & Taylor, 1997; Sandelowski, 2001).
Recording the Frequency of Data
Some scholars have argued that counting has merit: it ensures the researcher is being objective in their analysis and provides proof that the events or behaviours being categorised as “emerging” are occurring frequently. In contrast, Guest et al. (2012) noted that counting is inappropriate if theoretical sampling and emergent design principles are employed, and explained that the frequency of words is directly related to the number of times a participant answers a question using the same words or different words that have the same meaning. “If we are strategically asking the question in some interviews and not others as a way of building theory, then frequency [reveals little about the salience of the category]” (Daly, 2007, p. 234).
One method for summarising information is to create a table detailing the number of participants who supported each theme (Frieze, 2013). This paper suggests that noting the number of times keywords are repeated by participants, using the same word or different words with similar meaning, helps to reduce the number of quotations required to illustrate keywords (Naeem & Ozuem, 2021a, 2021b). The velocity of keywords is the number of times they have been repeated; viscosity is the richness of the language used by participants to explain the same narrative using different words. Viscosity leads to richness in the concepts being developed through the different words used by the participants.
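To make the velocity/viscosity distinction concrete, the sketch below is illustrative only: the synonym groups, function name and sample quotes are our own assumptions, not part of the PRICE model. It tallies, for each researcher-defined concept, how often its terms recur across quotations (velocity) and how many distinct terms participants used for it (viscosity).

```python
from collections import Counter

# Hypothetical synonym groups a researcher might define after close reading;
# each group collects different words participants used for the same idea.
concept_terms = {
    "enjoyment": {"enjoy", "fun", "pleasure", "delight"},
    "confusion": {"confused", "unsure", "lost"},
}

def velocity_and_viscosity(quotations, term_groups):
    """For each concept, count total term repetitions (velocity) and
    the number of distinct terms actually used (viscosity)."""
    stats = {}
    for concept, terms in term_groups.items():
        counts = Counter()
        for quote in quotations:
            words = quote.lower().split()  # naive whitespace tokenisation
            for term in terms:
                counts[term] += words.count(term)  # exact token matches only
        used = {t for t, n in counts.items() if n > 0}
        stats[concept] = {"velocity": sum(counts.values()),
                          "viscosity": len(used)}
    return stats

quotes = [
    "I really enjoy shopping online it is fun",
    "It was fun but I was unsure about delivery",
    "Honestly I felt lost and confused at checkout",
]
print(velocity_and_viscosity(quotes, concept_terms))
```

A high velocity with low viscosity suggests participants kept repeating one word; high viscosity signals the richer, varied vocabulary this section associates with conceptual richness. Naive exact-token counting will miss inflections, so any such tally only supplements, and never replaces, interpretive reading.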
Saturation in Coding and Theming
The qualitative data analysis process is repeated until no new themes or patterns emerge from the data, which indicates that saturation has been reached in data collection (Glaser & Strauss, 1967). Memo-writing and coding can also support thematic analysis by developing different perspectives on the data (Denzin & Lincoln, 2011). Memo-writing entails noting crucial observations, insights and codes as they emerge from the data, so that the researcher develops different perspectives and narratives throughout the research (Strauss & Corbin, 1997). Questions arise concerning what the researcher should pick from the data, and how long a quotation should be, to reach saturation. Creswell and Poth (2018) note that research can take many rounds of searching for data patterns until no new patterns emerge.
Questions raised by the study of perspectivation concern the function of perspective-sharing in written and spoken communication, as well as the most effective ways in which perspective-taking is practised in various settings (Benbaji, 2004). From here, it is a small leap to analyse referential movements in texts (von Stutterheim, 1989) and interpret them as depictions of shifts in point of view that include different perspectives on the same matter. Answering a question with a coherent piece of data requires the researcher to move between different semantic domains, or “domains of reference”, such as people, locations and times. Different linguistic devices, such as the use of anaphoric elements instead of indefinite noun phrases, and the selection of a particular word order, intonation contour and so on, all reflect this referential movement (von Stutterheim, 1989). Jayasinghe et al. (2021) suggest that researchers should consider referential movement in the data from a wide range of participants and should exercise caution when selecting quotes to support their claims. They should avoid citing one or two sources repeatedly, which relates to velocity, and should consider different referential movements, which relates to viscosity, to reach perspectivation. If a disproportionate number of the quotes come from a single source, it could seem that the authors are cherry-picking quotes that support their ideas, or only quoting a small number of participants whose interviews yielded fruitful quotable material.
Language Structure Perspective
The study of language and its use of perspective is known as “explicit reflexion” in linguistics (Graumann & Kallmeyer, 2002). Perspective is deeply embedded in language structure as a result of anthropomorphism (Canisius, 1987); the focus of analytical interest is on the communication of both overt and covert perspectival elements in the language's morphology. This perspective in data analysis supports the analysis of individual words and their relationships to other words within a given language, including their constituents such as roots, stems, prefixes and suffixes (Bratlie et al., 2022; Sagarra & Ellis, 2013). Therefore, “perspectivation” in the context of analysis focuses on the ways in which speakers use language to convey their own and other people's points of view, as well as the relationships between these perspectives. Here, we inquire as to how participants make use of their language options to express viewpoints (keywords) that will bring vibrancy to the data analysis, which explores socially constructed reality. Attaining the utmost vividness of the gathered qualitative data during the analysis phase pertains to the stage at which the researcher possesses a thorough and profound comprehension of how words have been employed and is capable of capturing the abundance and intricacy of the data from the theoretical and epistemological standpoint of the research.
Theoretical Perspective
The importance of the theoretical underpinnings needs to be considered when analysing data from a theoretical perspective. In order for readers to appreciate how a different theoretical framework might be applied to the same data and lead to different conclusions (Creswell & Poth, 2018), authors should aim to communicate how and when their interpretive lens is used to collect, explore, understand, analyse, explain and interpret the data (Neuman, 2018), and to “draw forth insights from the data. It may be helpful to qualify our observations to clearly indicate [that they are] inference[s]” (Wolcott, 1990, p. 32). In order to make the synthesis of the findings transparent and transferable, it is crucial that authors incorporate the theoretical ideas that guide the study prior to data collection, as well as during data analysis and the presentation of the findings (Morse, 1994). The epistemological perspective should also be considered because, in reality, the sociological and sociolinguistic investigation of social constructionism as a form of everyday communication intersects with the study of social stereotypes (Sacks, 1972a, 1972b). A growing number of studies on the definition of social or ethnic identities, discrimination and social exclusion (Fiedler & Semin, 1992) give social categorisation a central role in conversation analysis, which leads to considering different perspectives on the same matter due to the different social categories of the participants.
The social typification of individuals and their typical, predictable behaviour is of course highly relevant to the definition of habitual perspectives and their representation in communication and interaction, even if this approach is not always theoretically situated within a framework of perspectivism (Nazir et al., 2022). Consideration of how point of view affects texts and the words used by participants has long been recognised as a central concept in the study of literature (Ramezani, 2021). Participants’ words can be considered through the application of literary, theoretical and philosophical lenses. The different narrative and perspective styles that can emerge from adopting a variety of points of view (viscosity) on the same issues improve the vibrancy of the analysis (Van Dijk, 1984). Therefore, it would be right to say that the velocity and viscosity of different perspectives also improve the vibrancy of the analysis.
Researchers can use perspectivation to gain deep insight into a problem by considering it from multiple angles, in contrast with traditional, single-perspective approaches. This method involves using multiple perspectives, such as theoretical perspectives, to analyse the data, and consideration of how different perspectives may influence the interpretation of the data (Krippendorff, 2004). It is possible to ensure clarity of the analysis to its fullest extent through the explicit reporting of the multiple theoretical and epistemological perspectives involved. On this basis, the velocity, viscosity and vibrancy of the perspectives can achieve the perspectivation element of thematic analysis. Hence, one might employ a wide range of quotations and utilise them in two primary manners. Some quotes and keywords illustrate the different ideas of the time, which can be taken as viscosity, while the number of keywords related to the same perspective can be considered the velocity of perspectivation. How vocabularies (keywords) are related to one another in conversation relates to vibrancy (see Figure 2). Therefore, the number of times the same vocabulary (keywords) has been repeated, and the different types of vocabulary used to express the same matter or issue, can be considered velocity and viscosity respectively. For example, strong emotions (satisfaction, enjoyment, disappointment, hurt) as well as confusion and hesitancy might be expressed in some quotes, which constitutes the vibrancy of the data. This linguistic velocity, viscosity and vibrancy also aligns with social constructionist epistemological perspectives that lead to the development of a socially constructed reality of the research topic. Consequently, theoretical and philosophical vibrancy would also help to achieve perspectivation.
Perspectivation in qualitative data analysis pertains to the stage where the researcher possesses a profound comprehension of the data and is capable of perceiving it from many viewpoints. This can be achieved using velocity, viscosity and vibrancy in the selection of keywords and codes.
Figure 2. Elements of perspectivation.
Recapitulation
The main goal of recapitulation is to provide a clear and concise summary in the form of academic concepts. This procedure entails examining the data to formulate ideas based on the detected codes, and arranging the material in a manner that facilitates comprehension and interpretation. A concept is an idea or notion formulated through consideration of the quotations and keywords selected from the data; therefore, giving meaning to the data can be considered a concept. Concepts serve as the structural cornerstones for presenting the research data. A researcher’s ability to comprehend their selected keywords and quotations, and to name them as codes and themes, is the conceptualisation of the data. All objects, ideas and connections are linked back to the data, which should be kept in mind in the form of mental representations. The researcher may also create visual aids, such as matrices, diagrams or concept maps, to help organise and present the data and to develop concepts on the basis of real data. Recapitulation is a crucial stage in thematic analysis, as it enables the researcher to comprehend the data and present it in a manner that is significant (in terms of concepts) and beneficial for professionals and researchers. It serves as a bridge between data collection and the final contextualisation and conceptualisation.
The connection of the different perspectives will be discussed in the next part of this paper, in relation to integration as a progressive story of the research. The attribution of ‘quality’ indicators to quotations that do not correspond to the participant’s actual use of, or reliance on, the spoken words is an interesting observation (Beresford et al., 1999; Schlechtweg & Härtl, 2020). Quotations are often cited by qualitative researchers as a means of establishing a study’s credibility and of connecting with the arguments being made by the participants (Beck, 1993), or of validating its findings (Sandelowski, 2003). However, the selection of the number of quotations and keywords is about finding different perspectives on the same code or concept being developed from the data (Naeem et al., 2023; Naeem & Ozuem, 2022), through the velocity and viscosity of the quotations and keywords, which might also align with the research’s social constructivist and interpretivist epistemological perspectives. Alasuutari (1995) states that researchers who operate within the paradigms of social constructivism and interpretivism stress the importance of allowing people to speak for themselves; some argue that quotations increase textual diversity (viscosity) to make things more interesting (Holloway & Wheeler, 1996). Consequently, the selection of rich quotations would be a source of richness in the analysis (Eldh et al., 2020). Researchers did not anticipate the use of quotations in this manner and are uncertain about the possible implications of the differences in participant perspectives. Incorporating the opinions of participants is crucial for enriching the research’s depth (Gordon, 2020).
As discussed in the previous subsection, perspectivation can help a researcher to gain rich insights from the data. Recapitulation is not possible without elucidating all of the connections between concepts, which requires conceptualisation of the codes and themes. Sometimes the process of conceptualisation begins with a direct interpretation of a simple observation that is then pulled apart and reassembled in a more meaningful way (Saunders et al., 2018). Theoretical saturation occurs when no new evidence can improve the quality of the derived theory (Glaser & Strauss, 2009). Philosophical perspectives can also help in exploring the data from different angles. Recapitulation, as a key to saturation, requires the iterative process of thematic analysis, which identifies and refines important concepts through consideration of the developed codes and themes.
Analysing qualitative data is an ongoing cycle that starts while data are still being gathered, rather than afterwards (Hill & Hansen, 1960). Qualitative analysts make notes about what they think the text (field notes or interview transcripts) means and how it relates to other issues (Saunders et al., 2018). They might weigh the benefits of formally adopting a “progressive focusing” approach, which entails progressively narrowing and refining the research focus during fieldwork to account for highly unique and specific (emic) issues of socio-cultural behaviour. An analyst’s going back and forth between the data and their interpretation of it is an ongoing process throughout the project (Stake, 2010). When it becomes apparent that more concepts or relationships need to be explored, the analyst modifies the data collection process itself; progressive focusing describes this method (Sinkovics, 2018), which is helpful in developing new concepts on the basis of the analyst’s theoretical understanding and the literature. Consequently, a researcher can change the names of the codes and themes to align their phrasing with the theoretical and philosophical stance of the research.
In the results section, it can be difficult to know whether, how and to what extent the findings should be compared with the findings of previous studies. To avoid detracting from their own work, some authors choose not to reference relevant prior literature in their results sections (Matthews, 2005). Others do reference relevant prior literature, as doing so often provides necessary context for the current discussion and can help to develop new concepts on the basis of the codes and themes (Nelson, 2017); this can be helpful in two ways: it provides context for the discussion, and it can help the researcher to decide on the most powerful academic phrasing for the codes and themes as concepts. Langley (2013) states that there is no agreed-upon method for addressing the problem of saturation of conceptualisation; therefore, authors should decide whether or not to include prior literature in their results to define the codes and themes in general. It is essential for researchers to utilise theoretical frameworks and literature to develop concepts from the data in the form of categories. Subsequently, codes and themes should be revised to establish new concepts based on research findings (Wicker, 1985). However, it is imperative to note that codes and themes must be grounded in the real data (Naeem et al., 2023; Naeem & Ozuem, 2022). To achieve this, keywords can be used as an additional step in thematic analysis, enabling the development of concepts based on the meanings of the words used by the participants. This approach aligns with the social constructionist epistemological perspective, which supports the development of social reality (Berger & Luckmann, 1967).
Wallas (1926) suggested that there is no harm in using participants’ words to develop concepts, but the researcher needs to be careful about the sense and meaning of the words in different contexts. Some participants were relieved to see the words that they had used to explain a negative experience used to illustrate positive perspectives elsewhere in the research (Smith & Hitt, 2005); therefore, a researcher needs to choose words that can be interpreted in the same sense, either positive or negative, as this would lead towards conceptual leaps.
The term conceptual leap in qualitative research refers to the conscious development of an abstract theoretical idea within an empirical study. The final shape of such an idea may or may not result in a theoretical contribution (Klag & Langley, 2013). Making a conceptual leap means moving from the mass of words and other data (the world of the field), through and beyond the mechanics of analysis, to an abstract and explicit set of concepts, relations and explanations (the world of ideas) that have meaning and relevance beyond the specific context in which they were developed. Two components of a conceptual leap, seeing and articulating, are frequently intertwined (Van Maanen, 2010). Tsoukas (2009) stressed the importance of being able to capture situated specificity in order to answer the question “What is going on here?” Yet this first question is in constant dialogical tension with another, more abstract but important, question: “What is this a case of?” This requires researchers to link situated particularities with conceptual understandings, “extending the radius of application of the concepts at hand, thus helping to make new distinctions” (Tsoukas, 2009, p. 298).
The notion of rigour-by-convention suggests dangers in mechanically and unreflectively applying established methodological norms to situations where they do not apply. This strategy has the potential to impress influential people such as editors and reviewers, but it could also foster a “false rigour” (Eisenhardt et al., 2016, p. 1121). “The practices through which researchers construct the meaning of rigour are less well understood. This can obscure important contextual and discipline-specific differences in how rigour is defined” (Bell et al., 2017, p. 535). Excessive rigour is detrimental, as it results in the production of knowledge that is inconsequential for achieving social effect, and in the creation of weak, detached theory that fails to sufficiently describe the issue being studied. However, craft-based (i.e., involving tacit skills and imagination), context-specific views of rigour are gaining ground (Reuber & Fischer, 2021). Therefore, contextualisation is taken as a factor of saturation in the analysis of data. In addition, Welch et al. (2022) argue that there is a prevailing idea that management research must be applicable to all conceivable scenarios in order to be considered scientifically valid. The richness of context, which is supposed to be the essence of qualitative research, becomes a hindrance in this situation (Plakoyiannaki et al., 2019). The complexity of the social world (McLaren & Durepos, 2021, p. 74) can be better reflected through recapitulation of the different events in the data. Post-colonial and decolonial writers describe contextualisation in management research as an agenda that promotes dialogue, understanding and engagement with marginalised communities that have been historically excluded from mainstream society. This political approach contrasts with the more conventional approaches taken in mainstream management research (Smith, 2002).
An investigation’s rigour is evaluated not by some set of predetermined standards but by its own internal consistency and quality (Van Maanen et al., 2007). Rigour results from combining technical expertise, originality in design, introspection and openness to criticism (Bansal & Corley, 2011). The research process and methods, the research results, and the researcher themselves can all be placed in their respective contexts to achieve rigour-within-context (Czarniawska, 2016). Approaches to context vary greatly depending on the researcher’s underlying philosophical beliefs. Non-positivists view context as “a set of relevant connections that we construct as part of the research process to help achieve a research goal”, whereas positivists tend to view context as an already established, fixed container within which to place the phenomenon under study (McLaren & Durepos, 2021, p. 74). Contextualisation creates links to a wide range of discourse events and indicated concepts that could be merged into a new concept formed on the basis of the data (Gumperz, 1982; 1992), including frames of interpretation that may be situated within the text or interactional unit itself. There is no such thing as speech production without contextualisation (Auer & di Luzio, 1992). To put something into context is to “link observations to a set of relevant facts, events, or points of view that make research possible and theory that forms part of a larger whole” (Rousseau & Fried, 2001, p. 1). Recorded, transcribed and coded interviews are often regarded as gold-standard research practices, and researchers are expected to rely heavily on in-depth citations from interview transcripts to provide empirical support for their hypotheses (Blikstad-Balas, 2017). Textual analysis of transcripts is a crucial part of theory construction because non-interview data typically play a merely supplementary role (Silverman, 2017).
Gephart (2004) suggests that applying contextual lenses, with consideration of the chosen theory and philosophical position, can aid in achieving saturation and developing concepts from the data. The social constructionism position can be particularly useful in comprehending the role of language, norms, culture, and values, while theoretical underpinnings of the research can provide opportunities to conceptualize the data into meaningful and conceptual forms.
Adopting a context can help us understand the world, but it can also subconsciously “bind our interpretation to the context constructed by the dominant social group” (McLaren & Durepos, 2021, p. 78). The perceptual impact of contextualisation comes from the fact that it reveals the speaker’s intended interpretation of his or her words and, by extension, the kind of informational backdrop the speaker has chosen to adopt when giving those words. References within texts, and the connections that can be drawn between texts through contextualisation, are both examples of intertextuality (Bakhtin, 2010). Decontextualisation and recontextualisation involve the interpretation of utterances or texts by placing them in new contexts that may not align with the original context. Similar processes are widespread and can be observed not only in the intertextuality of large texts and extensive discourse worlds, but also in the local structuring of interaction sequences, where concepts are developed through consideration of the local, theoretical and philosophical context (Bell et al., 2017). The ultimate result is a cohesive and lucid synopsis of the discoveries that may be utilised to bolster the overarching study goals.
Although there is no universally applicable rulebook, it is generally accepted that studies should aim for a level of analysis that preserves the background information needed to develop conceptualisation of the research results (Reay & Whetten, 2011). Mees-Buss, Welch, and Piekkari (2020) assert that, in order to put research results in context, researchers must go through several iterations of interpretation with interpreters and informants. To better contextualise findings, it is helpful to share preliminary results with local informants; this can lessen the likelihood of missing the significance of social context. Researchers who are given access to the insights and interpretations of local speakers familiar with the area’s social norms and values are better placed to develop concepts from the data (Härtel & O’Connor, 2014). Richness of contextualised knowledge can be generated by appreciating and preserving local frameworks and expressions during the analysis and theorising phase. In light of this, and assuming all other factors remain constant, it is advisable for the qualitative researcher to lean towards employing a wider, more contextually grounded unit of analysis rather than a more precise level of analysis (such as sentences) in order to construct concepts based on the study context. The words we choose carry significance, and the perceptions people develop of us depend on the circumstances in which those words are encountered (Czarniawska, 2016). The significance of keywords is influenced not only by their proximity to other words, but also by their utilisation in established terms and phrases as concepts derived from the research. Rogers (2015) emphasises the need to consider participants’ context and the various possible interpretations of the data when constructing a concept. This process involves utilising the actual meanings derived from the data.
By acknowledging and examining the different methods of categorising data, a researcher demonstrates an understanding of the intricate nature of their surroundings and enhances their likelihood of deriving profound insights from the investigation.
Integration
In thematic analysis, integration enables the construction of a cohesive narrative, or the communication of a more comprehensive understanding of the study’s findings, through the connection of distinct themes. Establishing a transparent and verifiable connection between the data and the researcher’s deductions regarding each theme, in relation to the corresponding research codes and themes, is the aim of the integration process. In order to present a thorough and inclusive analysis of the research outcomes, it is essential that the themes are coherent and clearly differentiated from each other (Braun & Clarke, 2006). It is imperative to establish a correlation that is both transparent and verifiable between the data and the researcher’s deductions regarding each theme, taking into account the relevant codes and themes of the study (Cugno & Thomas, 2016). To ascertain that all deductions made from the data are substantiated, referential adequacy can be evaluated by juxtaposing the unprocessed data with the correlations established between the developed themes and codes within the ultimate research framework or model. According to Lincoln and Guba (1985), merely providing a table or a list of themes, or the number of participants who endorsed a specific theme, is insufficient. The authors bear the onus of demonstrating the interdependence and interaction of these concepts, as illustrated by Elliott et al. (1999, p. 223), through the use of a “data-driven story/narrative, framework, or underlying structure for the phenomenon or domain”. This necessitates the integration of previously developed ideas. DeSantis and Ugarriza (2000, p. 369) state that “well-defined themes in particular studies can be compared, contrasted, and utilised as building blocks”.
Theorising helps guide the analytical journey from the micro to the macro, in ways that reveal their interconnections and relationships (Burawoy, 2009). However, using pre-existing theory to extrapolate from the microscopic to the macroscopic is a difficult task, especially if the pre-existing theory does not provide cues or insights as to which contexts are germane to understanding and theorising about the focal phenomenon (Nguyen & Tull, 2022). Saldana (2016) notes that an analyst of qualitative data typically uses an inductive approach to discover relevant categories, patterns and relationships in the data. Like theme naming, coding involves linking concepts to one another and then to data (Saldana, 2016). One way to summarise the data is with a table showing how many people supported each theme (Frieze, 2013); however, there is also a need to integrate these themes and codes into a whole story, which is the integration stage required to reach full saturation through the integration of the developed concepts (Burton et al., 1991). Thematic clustering can be used to make this synthesis more manageable, developing a framework as a storytelling method (Diefenbach, 2009). An alternative would be to use a visual representation to summarise the grounded theory or the connections between the most important concepts that emerged from the data (Ganong et al., 2011; Goldberg, 2009). Depending on the findings of the data analysis process, the findings may be presented as a categorisation of the various responses among the participants, or the data could be presented as a conceptual model and descriptive text when the study’s aim is to generate a theory about a process.
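A theme-endorsement table of the kind Frieze (2013) describes can be produced with a few lines of code. The sketch below is a hypothetical illustration (the participant IDs and theme names are invented for the example), tallying how many participants endorsed each theme before those themes are integrated into a narrative.

```python
from collections import defaultdict

# Hypothetical mapping from participant IDs to the themes coded in
# their transcripts (names are illustrative only).
participant_themes = {
    "P01": {"trust", "belonging"},
    "P02": {"trust"},
    "P03": {"belonging", "identity"},
    "P04": {"trust", "identity"},
}

# Tally which participants endorsed each theme.
support = defaultdict(set)
for pid, themes in participant_themes.items():
    for theme in themes:
        support[theme].add(pid)

# Summary table: theme -> number of endorsing participants.
summary = {theme: len(pids) for theme, pids in support.items()}

for theme, n in sorted(summary.items()):
    print(f"{theme}: {n}/{len(participant_themes)} participants")
```

As the surrounding text notes, such a table is only a starting point; the counts must still be integrated into a coherent story of the developed concepts.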
Crystallisation
The crystallisation stage is defined in this paper as a process in which one must take a break from the analysis process (immersion) in order to reflect on what was learned and to articulate the recurring themes and patterns that emerged during the data examination. Improvements in data collection are just one area where crystallisation’s insights can be put to use; they are also useful for elaborating ever more nuanced and insightful interpretations. Márquez and Muñoz (2012) imply that researchers may be encouraged by their ability to isolate key themes, but they will need to exercise additional creative synthesis to progress from these building blocks to higher-order patterns, interpretations and theory. Understanding the data, or the “lessons learned”, requires pattern recognition as a foundational interpretive skill (Hollenbach et al., 2012). Interpretation involves abstracting out beyond the codes and themes to the larger meaning of the data in qualitative research (Klein et al., 1994), which might be helpful in answering why the research provides a real contribution and where the uniqueness of the study lies. We believe that inexperienced researchers often find pattern recognition difficult, but here the researcher can use new concepts in the light of developed themes to interpret and explain the developed framework.
As mentioned above, the insights from crystallisation can facilitate improvements in the arrangement of the boxes, shapes and concepts in the final conceptual framework. Additionally, reflection should be undertaken to justify the connections between the different concepts and factors of the final conceptual framework (Cugno & Thomas, 2016). One can crystallise alone or with a group, gaining feedback on the final developed conceptual framework. Stewart, Gapp, and Harwood (2017) suggested that working in research teams could provide crystallisation benefits, including the enhanced opportunity for dynamic interaction that can lead to the clarification of underlying themes, patterns and interpretations in the form of the final conceptual framework. Furthermore, crystallisation is required to corroborate the findings and to search for alternative interpretations after initial interpretations have been reached. Participant feedback on crystallisation findings, trends and interpretations can enable more nuanced interpretations, in addition to providing checks that the interpretations, and the subjectivity and context of the developed conceptual framework, are sound.
Edification
Edification is a process used in thematic analysis that focuses on the contributions and insights that research results can provide to improve understanding and knowledge of a research topic. The term originates from the Latin word “aedificationem”, which means construction or building (Gunji, 2017); it was originally used only in a religious context. The word “edifice” means a large and imposing building (Gómez-Chacón, 2015) and it originates from the same root. Edification is used to describe how research results, presented in the form of tables or frameworks, can contribute to the growth and development of individuals, intellectually, morally and spiritually. The process of edification commences by discerning the constituents of a comprehensive theory, as well as their interrelationships and the justifications for their inclusion in the overall growth of the theory. Researchers utilise a proposition or hypothesis as a foundation to develop a theory or model. The constituents of empirical data and their interrelationships are the fundamental elements of a conceptual framework, which can be articulated as propositions. In order to achieve the state of complete edification, the researcher must take into account various variables such as thoroughness, perspective, and the extent to which context is incorporated. This entails addressing inquiries such as: What factors should be considered to present a more comprehensive understanding of the phenomenon under investigation? What is the relationship between the factors? What justifies the selection of this theory as necessary and suitable for the research?
The meaning of edification developed in this paper concerns what is being provided in addition to the research itself. The focus of the edification concept is to address the question of how the research results, presented in the form of a table or framework, contribute to the improvement of the mind and understanding of the research topic. Sujatha and Alva (2020) suggested that edification is about making one a better person in some way, whether intellectually, ethically or spiritually. The phrase “building up of the soul” (Gómez-Chacón, 2015) likewise indicates that something is intended to help one grow, whether intellectually, morally or spiritually. In this paper, edification refers to the process of increasing the academic and professional community’s familiarity with the research’s actual contribution. Hence, the advancement and incorporation of these notions could potentially yield novel perspectives on the subject matter or offer a fresh approach to examining the problem. The researcher needs to justify the contributions made by integrating newly developed concepts based on data analysis. This integration is considered part of the data analysis because it is supported by real data.
To reach the saturation stage of edification, the components of a complete theory must first be identified, along with their interconnections and the reasons they should be considered in the development of the theory as a whole (Dubin, 1978). Using a proposition or hypothesis as a starting point, inductive researchers construct a theory or model (Fereday & Muir-Cochrane, 2006). The main distinction between propositions and hypotheses is that the former involve concepts, while the latter require measures (Whetten, 1989, p. 491). While it would be unreasonable to expect theorists to take into account every possible boundary constraint, it is still helpful to perform some quick mental tests to see how well central hypotheses generalise (Whetten, 1989). This mental exercise is grounded in the research’s keywords, codes, themes and concepts, which can be used to establish connections between previously unrelated elements. The concepts that make up real-world data, and the connections between them, form the building blocks of a conceptual framework, and the framework’s relations between concepts can be expressed in the form of propositions (Richards, 2015). According to Gergen (1982), it is essential for experience-based theories to be sensitive to context. Meanings, according to contextualists, develop organically out of the lived experiences of participants and the researcher alike. Therefore, the researcher’s own understanding forms the basis for the relationships among concepts in the conceptual framework, which is used to establish the links between various factors in a propositional fashion. An authoritative source on theory development (Dubin, 1978) states that a full theory must include the three elements described below.
What: which elements (variables, constructs and concepts) ought to be taken into account in order to provide a fuller picture of the phenomena of interest, whether they be collective or personal in nature? What is being explored through the specific theory and philosophical position? What was missing in the selected theory? What is the addition to the existing theory? The degree to which context is included, discussed above as contextualisation, can be evaluated using two criteria: comprehensiveness (i.e., are all relevant factors considered?) and perspectivation. Perspectivation helps to explore the rich perspectives of the data, which leads to answers regarding the importance of the research in terms of richness and context. What is the worthwhile addition to the state of knowledge? Whetten (1989) explains that experts in the field of reviewing are not always looking for ground-breaking new ideas. Yet revisions and expansions of existing theories ought to cause substantial shifts in the established views of academics (Corley & Gioia, 2011). Modifications can be scaled up or down depending on how much or how little they alter the system, and on what the actual alteration to the existing theory is.
How: once a set of contributing factors has been identified, the next question for the researcher is how these factors are connected (Corley & Gioia, 2011). Mechanically, this involves drawing lines between the boxes to indicate movement or interaction, which brings structure to the conceptualisation by defining the underlying patterns more precisely. The concept of causality is also typically introduced through the logic in the data. However, the researcher must also explain how these connections exist and how important they are in understanding the data. Why: it is necessary to explain the factors chosen and the proposed causal relationships, as well as the underlying psychological, economic or social dynamics at play (Whetten, 1989). Assumptions, or the theoretical glue that holds the model together, are based on this justification. Therefore, there is also a need to justify why the selected theory was necessary and appropriate to take as the basis of the developed model. The main issue discussed is why this particular interpretation of the phenomenon should take a specific philosophical stance. In the model’s reasoning, the answer can be found through the logic of the connections between factors, and the flow of the story should explain why the story needs to be told in that specific way. Salamzadeh (2020) therefore suggested that all connections must be empirically verified before the model can be used in the classroom; otherwise, it serves little purpose outside of the lab. To accomplish this, it is necessary to provide an explanation for the motivations behind the newly reconstructed “whats” and “hows”.
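For bookkeeping purposes, the what/how/why structure of a complete theory can be represented as a small data structure in which concepts are nodes and each proposition records how two concepts connect and why. This Python sketch is purely illustrative; the concept names and propositions are hypothetical examples, not a framework advanced by this paper.

```python
# Concepts ("what") are nodes; each proposition records the nature of
# the connection ("how") and its justification ("why"). All entries
# below are illustrative assumptions.
framework = {
    "concepts": ["perspectivation", "recapitulation", "saturation"],
    "propositions": [
        {
            "source": "perspectivation",
            "target": "recapitulation",
            "how": "rich perspectives feed the refinement of concepts",
            "why": "codes are grounded in varied quotations",
        },
        {
            "source": "recapitulation",
            "target": "saturation",
            "how": "refined concepts exhaust the new meaning in the data",
            "why": "no further themes emerge on re-examination",
        },
    ],
}

def links_from(concept):
    """Return the propositions that start at a given concept."""
    return [p for p in framework["propositions"] if p["source"] == concept]

# A complete theory answers both "how" and "why" for every link.
assert all(p["how"] and p["why"] for p in framework["propositions"])
```

Representing the framework this way makes the completeness check explicit: any proposition lacking a “how” or a “why” is immediately visible.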
Conclusion
Saturation is a crucial term that forms the foundation of the thematic analysis technique discussed in this paper, since it ensures a comprehensive study of all pertinent material. The PRICE model, consisting of Perspectivation, Recapitulation, Integration, Crystallisation and Edification, may be beneficial for researchers aiming to achieve saturation throughout the thematic analysis process. The analysis technique is methodically organised following the model, involving the discovery and classification of themes from a particular standpoint. These themes are then analysed and improved through recapitulation. Through the integration of many data sources, the researcher can discern patterns and correlations, while also establishing links between different themes. By employing crystallisation, the investigator can identify the fundamental themes and crucial findings. Finally, edification improves the researcher’s ability to clearly and fully express and convey the findings. By utilising this framework, scholars can ensure that their analysis is thorough and methodical, incorporating all relevant components. Furthermore, the PRICE model incorporates the process of “crystallisation”, the systematic arrangement and consolidation of identified themes, which helps ensure that the outcomes are meaningful and consistent. The ultimate phase of “edification” affords the researcher the opportunity to contemplate the findings and convey them to others in a lucid and succinct manner.
As researchers continue to code and analyse data, they will look for patterns and connections between the emerging themes in order to reach saturation. It is also possible that researchers may not reach saturation, especially in the case of large data sets; researchers may choose to cease analysis after a certain number of cases have been analysed to ensure the analysis is manageable and feasible. For the purposes of this paper, “saturation” was understood as the point at which the collected data set has been sufficiently explored and understood to produce the desired result of the research. When researchers code the same theme on numerous occasions, they learn to identify whether the situation or incident directs the research towards further aspects. “If yes, then the incident is coded and compared. If no, the incident is not coded since it only adds bulk to the coded data and nothing to the theory” (Glaser & Strauss, 1967, p. 111).
Researchers can utilise the PRICE model to achieve saturation in their thematic analysis. The methodology it provides is systematic and aids researchers in the process of discerning, classifying and scrutinising themes within their data set. It is a distinctive approach to thematic analysis owing to its holistic approach to data analysis and its utilisation of perspectivation; it can assist researchers in achieving data saturation and provide guidance for the future. The PRICE model, although based on the ongoing discussion on saturation in qualitative research, primarily incorporates principles and theories from grounded theory and other qualitative methodologies, each of which is supported by a unique philosophical standpoint. This research aims to provide a systematic approach to achieving saturation within this specific framework by selectively analysing relevant elements. Within this particular framework, saturation is delineated as the juncture at which researchers have fully exploited all potential applications of the data, indicating the conclusion of the data collection procedure. Further inquiries could explore the suitability of the PRICE model for different philosophical frameworks and research methodologies, its effectiveness in strengthening the dependability and precision of qualitative analysis, and its role in improving thematic analysis procedures.
Footnotes
Declaration of conflicting interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author(s) received no financial support for the research, authorship, and/or publication of this article.
Ethical Statement
Compliance Statement
This study has been ethically approved and is in compliance with ethical standards.
