Abstract
The increasing integration of Artificial Intelligence (AI) in social sciences research is reshaping qualitative methodologies, particularly in those studies that employ the netnography method. Although AI provides improved data processing abilities, it also introduces ethical and methodological concerns about privacy, transparency, authenticity, and possible bias. This paper proposes ethical and methodological frameworks for AI-augmented netnography that prioritize responsible AI use without compromising the interpretive depth and cultural sensitivity foundational to traditional netnography. The frameworks address the complexities of informed consent, data minimization, bias mitigation, and accountability, providing a structured approach to balancing AI’s efficiency with human-led analysis. Using a case study of online activism, this research illustrates the frameworks’ practical application across diverse digital platforms, such as Twitter and Instagram. By combining AI-driven sentiment and pattern recognition with human interpretive oversight, the study captures cultural nuances essential to understanding online social movements. This dual approach highlights AI-augmented netnography’s potential to deliver rigorous, ethically grounded insights into digital communities, promoting more nuanced and inclusive research outcomes. The study contributes to the evolving landscape of digital research by offering actionable, ethically robust frameworks applicable to a broad spectrum of qualitative studies, emphasizing socially responsible research practices in the digital age.
Keywords
Introduction
The integration of Artificial Intelligence (AI) into social science research presents both transformative opportunities and methodological challenges, particularly within qualitative methodologies. Among these, netnography, developed by Kozinets in the late 1990s, offers an immersive, interpretive approach to studying online communities by analyzing cultural significance and social interactions (Atsız et al., 2022). As digital engagement has evolved, netnography has expanded beyond text-based interactions to encompass visual and multimodal content across diverse platforms (Kozinets & Gretzel, 2024).
AI’s ability to process large-scale datasets has enhanced netnographic research by facilitating advanced data processing and pattern recognition (Chubb, 2023; Hitch, 2023). However, its integration also raises ethical and methodological concerns, particularly regarding transparency, authenticity, and interpretive integrity (Kozinets, 2002). While AI enables the analysis of more complex and extensive data than previously possible (Maedche et al., 2019), its use necessitates careful consideration of bias, accountability, and reflexivity to uphold the interpretive depth central to netnographic inquiry. As Kozinets et al. (2010) emphasize, transparency in AI-driven research is fundamental to maintaining participant trust, a cornerstone of ethical digital ethnography. Ensuring that AI methodologies remain methodologically sound, ethically responsible, and contextually aware is critical in preserving the rigor and integrity of netnographic studies.
This paper aims to develop a comprehensive framework for AI-augmented netnography that balances technological advancements with ethical and methodological integrity. Specifically, this paper seeks to address the following research questions: 1. How can AI be integrated into netnography while preserving the principles of transparency and reflexivity? 2. What ethical challenges arise when AI is used to analyze large-scale qualitative data in netnographic research? 3. How can netnographers ensure participant trust and informed consent when utilizing AI in their research?
By answering these questions, this paper proposes a practical framework that enables netnographers to incorporate AI tools ethically while maintaining the integrity of traditional qualitative methods.
In addressing these challenges, the paper merges ethical guidelines with methodological practices for AI-augmented netnography, building on Kozinets’ foundational work and recent advancements in AI. This approach seeks to integrate AI-driven analysis with human interpretive insights, ensuring that the traditional reflexive approach of netnography is not compromised. Although netnography has predominantly been applied in fields such as tourism and consumer behavior (Xiang & Cheah, 2023, 2024), this framework expands its potential application across broader social science research. By prioritizing participant privacy, transparency, and informed consent, this paper proposes a collaborative AI-human approach to enhance netnography while responsibly leveraging AI for large-scale qualitative inquiries.
Literature Review
Evolution of Netnography
Originally developed to study consumer behavior in virtual communities, netnography captures cultural meanings and social interactions in digital environments through immersive, qualitative techniques such as ‘thick description.’ This ‘digital’ ethnographic approach enables researchers to understand the nuanced cultural and contextual dimensions of online interactions in depth (Costello et al., 2017; Wallace et al., 2018). Over time, netnography has evolved significantly, broadening its scope to include multi-platform studies, visual and multimodal content, and increasingly complex digital interactions (Kozinets & Gretzel, 2024; Tavakoli & Mura, 2018).
Today, scholars across diverse fields—including tourism, sociology, marketing, and political science—have adapted netnography to capture social dynamics unique to their domains (Cheah et al., 2024, 2025). Kozinets et al. (2010) highlight this evolution, noting netnography’s adaptability to studies of online identity and consumer behavior as digital research contexts have expanded.
AI in Qualitative Research
The application of AI in qualitative research has grown significantly, encompassing tasks such as automated coding, thematic pattern recognition, and sentiment analysis. AI-driven natural language processing (NLP) and machine learning algorithms enable researchers to identify patterns across extensive datasets, often with greater efficiency than manual methods allow (Anis & French, 2023; Daneshfar et al., 2023).
Building on this, several recent studies demonstrate AI’s role in addressing qualitative research challenges. This paper highlights three key applications where AI has effectively enhanced qualitative methodologies:
• Sentiment Analysis in Social Research
• Thematic Analysis in Consumer Behavior Research: Del Vecchio et al. (2020) combined AI-based business analytics with netnography to analyze consumer-generated content on social media. Their study used topic modeling algorithms to identify emergent themes in customer feedback, highlighting how AI improves pattern recognition and thematic structuring in qualitative data.
• Automated Coding in Health Research
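To make the thematic-structuring idea concrete, the following minimal Python sketch surfaces candidate themes from consumer-generated posts via simple keyword frequency. The posts, stopword list, and function names are invented for illustration; this stands in for, and is far simpler than, the topic modeling algorithms used in studies such as Del Vecchio et al. (2020).

```python
from collections import Counter
import re

# Toy corpus standing in for consumer-generated social media posts
# (hypothetical examples, not data from any cited study).
posts = [
    "Battery life on this phone is amazing, charging is fast",
    "Terrible battery, drains fast and charging is slow",
    "Love the camera quality, photos look amazing",
    "Camera is blurry in low light, photos disappointing",
]

STOPWORDS = {"on", "this", "is", "and", "the", "in", "a", "look", "fast"}

def top_terms(docs, n=3):
    """Count content words across documents to surface candidate themes."""
    counts = Counter()
    for doc in docs:
        for tok in re.findall(r"[a-z]+", doc.lower()):
            if tok not in STOPWORDS:
                counts[tok] += 1
    return [term for term, _ in counts.most_common(n)]

# Frequent content terms hint at emergent themes (here, battery and camera);
# a real study would hand these seeds to a topic model and a human coder.
print(top_terms(posts, 5))
```

A human researcher would then inspect and label such candidate themes, which is exactly the AI-plus-interpretation division of labor the paper advocates.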
While these advancements illustrate AI’s potential to scale and streamline qualitative research, they also raise concerns about algorithmic bias, misinterpretation of contextual meaning, and ethical data handling. As AI continues to evolve, researchers must balance efficiency with reflexivity, ensuring that AI-augmented insights retain the interpretive depth essential to qualitative methodologies.
Since AI models are trained on existing datasets, they frequently inherit the social biases embedded within these sources, which can lead to distorted or incomplete interpretations (Akter et al., 2021). For example, sentiment analysis algorithms may misinterpret colloquialisms, dialects, or culturally specific expressions, particularly in global, multicultural online spaces. As a result, relying solely on AI for analysis can compromise the interpretive depth and cultural sensitivity that are crucial to the netnographic approach.
Ethical Concerns in AI-Enhanced Research
Ethical concerns surrounding privacy, transparency, and informed consent are becoming increasingly critical as AI is integrated into qualitative research. Traditional netnography relies on transparent engagement with online communities, ensuring that participants are aware of the researcher’s presence and objectives (Cheah et al., 2023; Cheah & Koay, 2022). However, AI-driven data collection often blurs these boundaries, sometimes extracting data without explicit consent, which not only risks infringing on participant privacy but also undermines the trust foundational to ethical social research (Demant & Moretti, 2024).
Addressing these challenges requires ethical frameworks for AI-augmented netnography that establish clear guidelines to safeguard participant privacy, ensure transparency in data usage, and preserve the integrity of data interpretation. Additionally, researchers must recognize and mitigate biases introduced by AI, as algorithmic processes may unintentionally prioritize certain content or perspectives, leading to outcomes that do not fully reflect the diversity of online communities. Therefore, it is essential to implement a framework that upholds transparency and accountability in both data collection and analysis within AI-driven netnographic research.
Failures in ethical AI integration further underscore the urgency of these concerns. For example, sentiment analysis tools trained on biased datasets have misclassified emotional tones in online discussions, particularly when applied across culturally diverse groups (Akter et al., 2021). Similarly, automated data scraping techniques have been criticized for bypassing informed consent, leading to breaches in participant privacy (Demant & Moretti, 2024). These instances underscore the urgent need for rigorous ethical frameworks that specifically address algorithmic bias, ensure transparency, and protect participant autonomy in AI-augmented netnography.
Research Gap
While netnography has evolved to address the complexities of digital environments, a significant gap remains in the ethical and methodological frameworks guiding AI integration. Existing studies tend to focus on either traditional netnographic techniques or AI-driven data analysis, but they lack a unified framework that effectively combines the strengths of both approaches. Specifically, structured guidelines are missing for managing ethical concerns unique to AI applications in netnography, such as algorithmic bias, data authenticity, and participant privacy (Burles & Bally, 2018; Newman et al., 2021).
This gap highlights the need for a comprehensive framework that standardizes ethical AI practices within netnography while also refining methodological processes to ensure the responsible use of AI. By addressing these challenges, researchers can enhance the rigor and ethical accountability of AI-augmented netnographic studies, thereby strengthening their application across various domains in the social sciences.
Proposed Ethical Framework for AI-Augmented Netnography
Figure: Proposed Ethical Framework.
Principles of Informed Consent and Privacy
In AI-augmented netnography, the concept of informed consent becomes increasingly complex. Automated collection and analysis of large volumes of digital data can, at times, bypass traditional consent mechanisms (Akter et al., 2021). Unlike face-to-face interactions, where informed consent is explicit and continuous, AI-driven data gathering often occurs in an impersonal digital realm, introducing novel ethical challenges. To address these concerns, this paper proposes the following principles: (1) Digital Consent Notices: Establishing clear digital consent protocols is essential, especially when data is sourced from social media platforms. Where feasible, researchers can implement consent notices or opt-in mechanisms on websites and platforms, informing participants that their public data may be collected and analyzed. Although challenging to implement consistently across some social media sites, these measures enhance transparency and help participants understand their role within the research. (2) Data Minimization and Purpose Limitation: Data minimization—collecting only data essential to the research objectives—is a fundamental strategy for preserving participant privacy. By limiting data collection to strictly necessary information, researchers can reduce exposure risks and uphold privacy standards. Purpose limitation further ensures that data will be used exclusively for clearly defined research aims, thereby fostering trust in the research process. (3) Transparency in AI Processes: AI can obscure the traditional researcher-participant dynamic, where direct engagement with participants promotes awareness of the research scope. Transparency regarding AI’s role in data processing—including the documentation of algorithms used, their specific functions, and data storage practices—is crucial to maintaining ethical integrity in AI-augmented netnography.
Researchers should include clear disclosures about AI’s involvement within participant information sheets or consent materials whenever possible.
In AI-driven contexts, informed consent becomes a nuanced, ongoing process. Researchers must remain attuned to the evolving nature of digital interactions and continuously update consent protocols to align with the changing dynamics of digital platforms.
Bias and Objectivity in AI Tools
AI tools often introduce inherent biases that can skew qualitative research findings, particularly when they reflect the demographics or ideological leanings of the training data on which they were developed (Drukker et al., 2023). These biases present significant ethical challenges for netnographers seeking accurate and inclusive interpretations of social phenomena.
Indeed, algorithmic bias remains a major challenge in AI-augmented netnography, often reflecting societal inequities embedded in training data. For example, sentiment analysis tools used in social justice research have misclassified activist discourse as ‘negative’ due to training on predominantly Western linguistic norms, failing to account for cultural variations in language use (Akter et al., 2021). Similarly, AI-powered visual analysis has reinforced racial and gender stereotypes, misrepresenting the diversity of online communities. To mitigate such biases, researchers must adopt corrective measures, including bias audits, use of diverse training datasets, and human oversight to contextualize AI-generated insights.
This paper suggests a few ways to reduce bias using AI tools: (1) Bias Detection and Correction Protocols: Regularly assessing AI outputs for signs of demographic, cultural, or linguistic bias is essential in ensuring objectivity. One approach is to conduct parallel analyses on smaller subsets of data, allowing researchers to identify patterns of AI bias and adjust the analysis accordingly. Where bias is detected, researchers should apply corrective measures, such as refining algorithms or manually re-evaluating biased outputs. (2) Diverse Data Training Sets: Using representative training data sets minimizes the risk of algorithmic bias by ensuring diverse perspectives are included. Researchers should be cautious about the data on which they train AI tools, as demographic imbalances or culturally skewed data sources can lead AI to amplify dominant or majority narratives at the expense of minority voices. Creating or sourcing balanced training data sets is a proactive measure to enhance objectivity. (3) Collaborative Interpretation of AI Outputs: To mitigate biases and enhance accuracy, researchers should combine AI-generated insights with manual analysis. This approach allows human researchers to validate AI findings, using cultural and contextual knowledge to identify biases that might distort interpretations. Collaborative interpretation not only improves objectivity but also maintains the cultural richness that is central to netnography.
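The first protocol above, running parallel analyses on a smaller subset, can be sketched in a few lines of Python: human codes and AI labels for the same posts are compared, with disagreement rates broken down by a demographic or linguistic marker. The group names, labels, and records below are entirely hypothetical and serve only to illustrate the shape of such a bias audit.

```python
from collections import defaultdict

# Each record: (group marker, human label, AI label) -- toy parallel subset.
parallel_subset = [
    ("dialect_a", "positive", "positive"),
    ("dialect_a", "negative", "negative"),
    ("dialect_a", "positive", "positive"),
    ("dialect_b", "positive", "negative"),  # hypothetical: AI misreads dialect B
    ("dialect_b", "positive", "negative"),
    ("dialect_b", "negative", "negative"),
]

def disagreement_by_group(records):
    """Rate at which AI labels diverge from human coding, per group."""
    totals, errors = defaultdict(int), defaultdict(int)
    for group, human, ai in records:
        totals[group] += 1
        if human != ai:
            errors[group] += 1
    return {g: errors[g] / totals[g] for g in totals}

rates = disagreement_by_group(parallel_subset)
# A large gap between groups flags likely algorithmic bias worth investigating.
print(rates)
```

If one group shows a markedly higher disagreement rate, that is the signal to apply the corrective measures described above, such as refining the algorithm or manually re-evaluating the affected outputs.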
Transparency and Accountability
Transparency and accountability are cornerstones of ethical AI-augmented netnography. When researchers are open about the role AI plays in their study, it strengthens the study’s integrity and builds trust with participants, readers, and the broader academic community. (1) Detailed Documentation of AI Processes: Researchers should keep thorough records detailing AI algorithms and processes used in data collection and analysis. This documentation should include descriptions of the algorithms, the rationale behind their selection, and any limitations they may have. Such transparency allows for clearer, more reliable research replication and peer review. (2) AI Audit Trails: Establishing audit trails to log AI interactions with data ensures researchers can monitor AI’s role at every stage of the data process. These logs enable researchers to review AI decisions and outputs, assessing their reliability and alignment with research objectives. Audit trails also provide a means for identifying and addressing errors or biases in AI data handling. (3) Clear Disclosure in Publications: Researchers should explicitly disclose AI’s involvement in their methodology sections, outlining the AI tools, purposes, and limitations. Such disclosure enables readers to critically assess the reliability of AI-driven insights within the broader interpretive framework, ensuring transparency and accountability across the research process.
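The audit-trail principle above can be implemented very simply. The sketch below appends each AI interaction with the data as a timestamped JSON line; the record schema and tool names are assumptions for illustration, not a standard, and in practice the stream would be an append-only log file rather than an in-memory buffer.

```python
import io
import json
from datetime import datetime, timezone

def log_ai_step(stream, tool, action, detail):
    """Append one audit record as a JSON line for later review."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "tool": tool,      # which AI component ran
        "action": action,  # what it did to the data
        "detail": detail,  # parameters, dataset slice, etc.
    }
    stream.write(json.dumps(record) + "\n")
    return record

# StringIO keeps the sketch self-contained; a study would use a log file.
trail = io.StringIO()
log_ai_step(trail, "sentiment-model-v1", "classify", {"posts": 500})
log_ai_step(trail, "topic-model", "cluster", {"topics": 8})
print(trail.getvalue())
```

Because each line is self-describing, the resulting log can be replayed during peer review to verify which algorithm touched which data and when, directly supporting the replication and disclosure goals above.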
Through these ethical practices, AI-augmented netnography can adhere to the highest standards of privacy, transparency, and ethical accountability, ensuring both methodological rigor and ethical responsibility.
Broader Ethical Dimensions of AI in Netnography
AI-augmented netnography raises ethical risks beyond bias, transparency, and consent. AI’s pattern recognition may prioritize dominant narratives, sidelining marginalized voices and skewing community representation, while its emphasis on generalizable trends challenges qualitative epistemology, potentially eroding netnography’s contextual richness. Over time, increasing reliance on AI risks shifting digital ethnography toward automation, diminishing human immersion. To counter these, researchers should conduct narrative audits to ensure diverse representation, preventing the overemphasis of dominant voices, and ground AI insights in human analysis to preserve qualitative principles. Documenting epistemological tensions in reflexive journals fosters critical reflection, while periodically reassessing AI’s role maintains a balance between automation and methodological integrity.
Methodological Framework for AI-Augmented Netnography
Integrating AI with Human Researcher Insights
AI’s strength in processing large datasets offers netnography an efficient means to identify patterns, but human interpretive insight remains essential for cultural and contextual depth. To integrate AI effectively, researchers can adopt a structured collaborative approach in which AI identifies initial patterns at scale and human researchers conduct interpretive refinement, grounding machine-detected trends in cultural and contextual knowledge.
This approach leverages AI’s efficiency while grounding results in human expertise, ensuring robust, culturally sensitive insights.
Data Collection and Analysis Protocols
Structured protocols ensure methodological soundness in AI-augmented netnography, guiding tool selection and data analysis, from criteria for platform selection and data minimization through to layered analysis and validation of AI outputs.
These protocols provide a practical roadmap for integrating AI, balancing technological precision with qualitative rigor.
Reflexivity in AI-Driven Netnography
Reflexivity is essential for recognizing AI’s influence on research findings, fostering a critical awareness of how AI shapes qualitative insights. Reflexive practices ensure that researchers maintain ethical and interpretive rigor (Oliphant & Bennett, 2019) throughout AI-augmented netnographic studies. (1) Ethical Journaling: Ethical journaling involves documenting reflections on AI’s role within the research process. For instance, researchers can note moments when AI limitations impacted data interpretation, fostering transparency and continuous ethical engagement. (2) Continuous Reflexive Feedback Loops: Incorporating feedback loops between AI-generated outputs and human interpretive insights encourages researchers to re-evaluate AI’s impact at various stages. This iterative reflection allows researchers to refine AI applications in response to emergent findings or methodological challenges, maintaining alignment with ethical and interpretive goals. (3) Acknowledging AI’s Role in Publications: Finally, researchers should openly acknowledge AI’s limitations and influence within their publications. Transparent disclosure about AI’s role in data interpretation enables readers to critically assess AI-driven insights, fostering accountability and ethical integrity in AI-augmented netnography.
By establishing a structured methodological framework that integrates AI while respecting netnography’s interpretive depth, researchers can leverage AI’s advantages responsibly, maintaining both ethical and methodological rigor.
Case Study: Applying AI-Augmented Netnography to Online Activism Research
To illustrate the proposed ethical and methodological frameworks, this case study explores a hypothetical application of AI-augmented netnography in analyzing online activism within digital communities. The focus is on social justice movements in the digital sphere, specifically examining discourse around issues like climate change, racial equity, and economic justice. Platforms such as X (formerly known as Twitter), Facebook, and Instagram are chosen due to their varied digital formats and widespread use in activism. This example demonstrates how AI can efficiently manage and analyze large datasets, while human oversight preserves cultural and contextual nuances crucial to netnographic research.
Research Context and Objectives
Online activism has become a key avenue for social movements to raise awareness, build communities, and mobilize support for social issues. In recent years, platforms like X, Facebook, and Instagram have become hubs for organizing campaigns, sharing resources, and expressing solidarity. This study’s objectives are twofold: (1) to understand the digital narratives and strategies used by activists to promote social justice and (2) to identify recurring themes, sentiments, and patterns in online discussions about climate action, racial equality, and economic justice.
This approach seeks to answer questions such as: - How do activists use social media platforms differently to promote their causes? - What themes, sentiments, and calls to action emerge in online activism discourse? - How do AI-driven patterns align with human interpretations of the cultural and contextual nuances of activism?
Data Collection
Using AI-augmented netnography, the case study implements a structured approach to data collection, with specific focus on ethical practices, transparency, and representational balance. (1) Platform Selection: Each platform chosen provides unique insights into online activism. X is valuable for real-time, text-based interactions; Facebook facilitates community discussions and long-form posts; and Instagram combines visual and textual content, offering a multi-modal perspective on activism. By examining multiple platforms, the study captures a holistic view of digital activism. (2) Automated Data Retrieval: AI-driven data scraping techniques are employed to collect posts tagged with relevant hashtags and keywords, such as #ClimateJustice, #LGBTQ, and #EconomicEquity. These tags serve as entry points into online activism communities. AI scrapes and aggregates posts containing these terms, capturing a broad spectrum of public discourse across platforms. (3) Data Filtering and Categorization: AI algorithms categorize data based on sentiment, engagement metrics (likes, shares, comments), and topical relevance. Through natural language processing (NLP), posts are grouped by themes, allowing researchers to quickly identify dominant narratives. Filters for language and demographic markers ensure the sample represents a diverse set of voices and perspectives within the activist community.
In line with ethical guidelines, a consent disclaimer is posted where possible, informing participants that publicly available data may be used for research purposes. Additionally, data minimization practices are enforced to collect only relevant data, minimizing participant exposure while retaining analytical value.
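The hashtag filtering and data-minimization steps described above can be sketched as a small Python filter: only posts carrying the study's hashtags are kept, and each kept record is stripped down to the fields the analysis actually needs. The post structure, field names, and example records are hypothetical.

```python
# Hashtags drawn from the case study; matching is case-insensitive.
RELEVANT_TAGS = {"#climatejustice", "#economicequity"}

# Data minimization: retain only analytically necessary fields,
# dropping identifying ones such as usernames and locations.
KEEP_FIELDS = ("text", "platform", "hashtags")

posts = [  # hypothetical scraped records
    {"text": "March today! #ClimateJustice", "platform": "x",
     "hashtags": ["#ClimateJustice"], "username": "@activist1", "location": "NYC"},
    {"text": "Weekend recipes", "platform": "instagram",
     "hashtags": ["#food"], "username": "@cook", "location": "LA"},
]

def minimize(post):
    """Keep only the fields required by the research objectives."""
    return {k: post[k] for k in KEEP_FIELDS}

def collect(raw_posts):
    """Return minimized records for posts matching the study's hashtags."""
    out = []
    for p in raw_posts:
        tags = {t.lower() for t in p["hashtags"]}
        if tags & RELEVANT_TAGS:
            out.append(minimize(p))
    return out

sample = collect(posts)
print(sample)  # one record remains, stripped of identifying fields
```

Applying minimization at collection time, rather than during later cleaning, means identifying information never enters the research dataset at all, which is the strongest form of the exposure reduction discussed above.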
Data Analysis
The data analysis phase demonstrates the balance between AI-driven pattern recognition and human interpretive insight, ensuring both depth and rigor in understanding online activism. (1) Sentiment Analysis: AI-driven sentiment analysis categorizes posts by emotional tone, providing an initial overview of the emotional register of activist discourse. (2) Pattern Identification and Thematic Coding: AI identifies recurrent patterns and themes, such as calls for policy change, community solidarity, or resource-sharing posts. Topic modeling algorithms group these themes, and frequency analysis highlights the most discussed issues, providing a quantitative overview of digital activism discourse. (3) Interpretative Refinement: Human researchers engage in a structured validation process to refine AI-driven outputs, ensuring interpretive accuracy and mitigating bias. This process includes: • Parallel Manual Coding • Cross-Platform Thematic Consistency: AI-identified patterns are compared across different social media platforms (Twitter, Instagram, Facebook) to confirm narrative coherence and detect AI biases in specific contexts. • Expert Panel Review • Audit Trails & Reflexive Journaling
Figure: Sentiment Analysis of Online Activism Posts (author-created image).
Figure: Thematic Coding Distribution: Narrative and Emotional Evolution (author-created image).
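To ground the sentiment step, the minimal Python sketch below classifies toy posts with a hand-built lexicon and tallies tone frequencies. The word lists and posts are invented for illustration and stand in for the trained sentiment models a real study would use; the cultural-misreading risks discussed earlier apply to both.

```python
from collections import Counter

# Hypothetical lexicon; real sentiment models learn this from data.
POS = {"hope", "together", "win", "support"}
NEG = {"crisis", "failure", "angry", "injustice"}

def tone(text):
    """Crude lexicon-based tone: positive, negative, or neutral."""
    words = set(text.lower().split())
    score = len(words & POS) - len(words & NEG)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

posts = [  # toy activist posts
    "together we win",
    "climate crisis is an injustice",
    "march at noon",
]

# Frequency overview of emotional tone, as described in step (1).
tones = Counter(tone(p) for p in posts)
print(tones)
```

These machine labels would then feed the interpretative-refinement step, where parallel manual coding checks whether, for instance, urgent activist language has been misread as merely "negative".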


Ethical Considerations
Throughout this study, ethical guidelines are rigorously upheld, with a strong emphasis on privacy and informed consent protocols, particularly given the sensitive nature of social justice topics. Where possible, transparency notices are implemented to inform users of data usage, and data minimization practices are employed to reduce the exposure of personal information.
This case study illustrates the practical application of the proposed frameworks, showcasing how AI’s capacity to handle large volumes of digital data can be effectively combined with the cultural depth and ethical accountability that human oversight provides. By adopting a balanced approach, this study demonstrates that AI-augmented netnography can yield meaningful insights while adhering to high ethical research standards.
Discussion
Addressing Methodological Challenges and Innovations
The integration of AI into netnography introduces both challenges and methodological innovations. One primary challenge is balancing AI’s efficiency in handling large datasets with netnography’s foundational principle of cultural immersion. While AI enables researchers to process vast quantities of data quickly, it risks diluting interpretive depth if not properly contextualized by human insights. The proposed framework mitigates this risk by embedding AI within a human-led analysis structure, where AI identifies initial patterns and human researchers conduct interpretive refinement.
Additionally, the structured protocols for data collection and validation presented here represent methodological innovations that enhance netnography’s adaptability to large-scale social phenomena. By defining criteria for platform selection, data minimization, and layered analysis techniques, researchers can systematically address both ethical and methodological concerns. This dual framework not only respects the cultural richness of qualitative research but also demonstrates AI’s potential to enhance research rigor when applied responsibly.
Ethical Implications for Social Science Research
The ethical implications of AI-augmented netnography extend beyond bias, transparency, and consent to encompass AI’s capacity to shape narratives, influence epistemological frameworks, and alter the trajectory of digital ethnography. AI’s tendency to highlight dominant patterns may inadvertently construct narratives that overshadow marginalized voices, necessitating proactive measures to ensure inclusivity. Furthermore, the shift toward AI-driven analysis challenges the epistemological roots of qualitative research, potentially prioritizing efficiency over interpretive depth. Over the long term, these dynamics could redefine digital ethnography, raising concerns about the erosion of human immersion in favor of automated processes. By addressing these risks through reflexive oversight and ethical guidelines, researchers can harness AI’s potential while safeguarding the cultural and methodological integrity of netnography.
Expanding AI-Augmented Netnography in Social Sciences
The potential applications of AI-augmented netnography extend well beyond activism studies, offering valuable insights across various fields within the social sciences. This framework can be adapted to analyze consumer behavior in marketing, assess public sentiment in health research, and investigate political discourse within political science. For instance, in healthcare research, AI-augmented netnography could be used to examine patient communities, providing insights into public health perceptions, or to analyze mental health support forums for emerging trends and narratives.
Furthermore, the interdisciplinary nature of AI-augmented netnography encourages collaborations across computational sciences, sociology, and ethics, expanding its utility for complex, multifaceted research inquiries. This adaptability not only enhances methodological rigor but also enables researchers to explore complex social phenomena with culturally nuanced insights. Future research should prioritize further refinement of these frameworks, particularly through comparative studies that examine AI’s adaptability across diverse cultural contexts and digital platforms.
Limitations and Future Directions
While AI-augmented netnography offers significant methodological advantages, it introduces several limitations that warrant careful consideration, with implications for its long-term role in digital ethnography. First, AI’s capacity to interpret the cultural depth inherent to netnographic studies remains constrained, as pattern-based algorithms may overlook subtle or culturally specific meanings, potentially distancing netnography from its qualitative roots. Similarly, AI’s narrative-shaping power can skew findings toward dominant discourses, amplifying prevalent voices while marginalizing less prominent perspectives, which challenges the inclusivity central to ethnographic inquiry. Furthermore, biases embedded within AI models—reflecting their training data—may distort results, particularly in research exploring sensitive social issues, underscoring the need for ongoing vigilance.
Beyond these interpretive and ethical challenges, the sustainability of AI-augmented netnography raises additional concerns. The environmental impact of AI development, training, and deployment is substantial, as large-scale models—especially those involving natural language processing (NLP) and deep learning—require extensive computational resources, leading to high energy consumption and carbon emissions (Ahmad et al., 2021). This ecological footprint prompts ethical questions about resource allocation, energy efficiency, and responsible AI use in academia. Moreover, the increasing reliance on AI could threaten the sustainability of human-led methodologies, potentially shifting digital ethnography toward automation at the expense of researcher immersion and cultural sensitivity.
Future research should address these multifaceted limitations through a dual focus on methodological refinement and ethical responsibility. Longitudinal studies tracking AI’s impact on narrative representation, epistemological coherence, and cultural interpretation could illuminate its long-term effects on netnography’s qualitative foundations. Simultaneously, exploring sustainable AI practices—such as low-resource models, optimized data retrieval, and energy-efficient computing (Alsadie, 2024)—can mitigate environmental concerns, ensuring alignment with broader sustainability goals. Collaborations between social scientists and AI ethicists could further drive the development of adaptive ethical frameworks and greener AI solutions, balancing efficiency with ecological and methodological integrity. By pursuing these directions, researchers can ensure that AI-augmented netnography remains a robust, culturally sensitive, and environmentally conscious methodology in the evolving digital landscape.
Conclusion
In response to the growing use of AI in social science research, this paper proposes ethical and methodological frameworks for AI-augmented netnography, offering guidelines that address privacy, bias, and interpretive depth. This approach demonstrates that, while AI introduces challenges around transparency and cultural sensitivity, these can be mitigated through structured protocols and human oversight. By combining AI’s capacity for large-scale data processing with netnography’s emphasis on cultural immersion, researchers can achieve a balanced approach that upholds both ethical and methodological standards. The case study on online activism illustrates practical applications, showing how AI-augmented netnography can yield valuable insights into complex digital spaces while remaining socially responsible and methodologically rigorous.
Looking ahead, future research should test and refine these frameworks across specific empirical and methodological domains. Empirically, studies could explore AI-augmented netnography’s effectiveness in diverse cultural contexts, such as analyzing online communities in non-Western digital spaces (e.g., WeChat in China or ShareChat in India) to assess its adaptability to varying linguistic and social norms. Methodologically, experiments with emerging AI tools—such as generative models (e.g., GPT variants) for narrative synthesis or multimodal AI for integrated text-image analysis—could push the boundaries of netnographic inquiry, evaluating their impact on interpretive depth. Comparative studies across disciplines, such as applying the framework to health narratives versus political discourse, would further reveal its versatility and limitations. These investigations should also examine AI’s long-term influence on researcher-participant dynamics and ecological sustainability, ensuring that netnography evolves as a robust, ethical methodology in the rapidly changing digital landscape.
Glossary
AI Bias
Definition: Systematic errors in AI outputs caused by prejudiced assumptions or imbalances in training data, leading to unfair or inaccurate results.
Automated Coding
Definition: The use of AI tools to categorize and label qualitative data, such as text or images, based on predefined or learned patterns.
Data Minimization
Definition: A principle of collecting only the data necessary for a specific research purpose to reduce privacy risks and unnecessary data exposure.
Digital Consent Notices
Definition: Notifications or opt-in mechanisms informing online users that their publicly available data may be collected and analyzed for research purposes.
Human-AI Hybrid Analysis
Definition: A research approach combining AI’s data processing capabilities with human interpretive skills to enhance analysis while preserving contextual depth.
Inter-Coder Reliability
Definition: A measure of agreement among multiple researchers coding the same data, used to ensure consistency and reliability in qualitative analysis.
Natural Language Processing (NLP)
Definition: A branch of AI that enables computers to analyze, understand, and generate human language, often used for text analysis in research.
Pattern Recognition
Definition: The AI-driven process of identifying trends, themes, or structures within datasets, such as recurring topics in social media posts.
Purpose Limitation
Definition: A principle ensuring that collected data is used only for the specific research objectives outlined, preventing misuse or overreach.
Reflexive Journaling
Definition: A practice where researchers document their reflections on the research process, including AI’s influence, to enhance transparency and critical awareness.
Sentiment Analysis
Definition: An AI technique that categorizes text based on emotional tone (e.g., positive, negative, neutral) to understand attitudes or opinions.
Thematic Analysis
Definition: A qualitative method for identifying, analyzing, and reporting recurring themes or patterns within data, often aided by AI tools.
Topic Modeling
Definition: An AI method that automatically identifies and groups recurring topics or themes within large collections of text data.
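Two of the concepts defined above can be illustrated concretely. The following is a minimal sketch, not part of the study's actual pipeline: it pairs a toy lexicon-based sentiment classifier (real studies would use a validated lexicon or model) with Cohen's kappa, a standard statistic for inter-coder reliability that corrects raw agreement for agreement expected by chance. The word lists and example posts are hypothetical.

```python
from collections import Counter

# Hypothetical mini-lexicon; a real study would use a validated resource.
POSITIVE = {"support", "hope", "solidarity", "justice"}
NEGATIVE = {"unfair", "angry", "ban", "silenced"}

def sentiment(text: str) -> str:
    """Classify a post as positive/negative/neutral by counting lexicon hits."""
    words = text.lower().split()
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    if pos > neg:
        return "positive"
    if neg > pos:
        return "negative"
    return "neutral"

def cohens_kappa(coder_a: list[str], coder_b: list[str]) -> float:
    """Inter-coder agreement corrected for chance: (p_o - p_e) / (1 - p_e)."""
    assert len(coder_a) == len(coder_b) and coder_a
    n = len(coder_a)
    # Observed agreement: share of items both coders labeled identically.
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Expected chance agreement from each coder's marginal label frequencies.
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    expected = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)
```

In an AI-augmented workflow of the kind the frameworks describe, the automated labels would serve as one "coder" alongside a human researcher, and a low kappa would flag posts needing human interpretive review rather than being accepted as-is.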
Footnotes
Funding
The author disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: This research is part of the Guangdong Province Innovation Research Grant (Project Code 2023WTSCX108).
Declaration of Conflicting Interests
The author declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
