Abstract
There is ongoing debate around how to design online synchronous qualitative research studies, and how to respond in the moment, when researchers suspect that they are engaging with ‘impostor’ or ‘fraudulent’ participants. Initial literature framed ineligible participants as a threat to data quality and the integrity of the research itself, calling for reactive approaches to potential participants. This paper contributes to the growing literature cautioning that strict screening approaches may harm genuine participants and undermine inclusion efforts. We explore the concept of ‘knowing’ research participants in qualitative research, focusing on methods that enhance how we genuinely come to know the participants we seek to include, particularly in reclaiming interactions that may have become curtailed during online research. Through consideration of researchers’ ethical responsibilities in relation to what is presumed or learned, we offer methodological reflections on how researchers’ skilful attention to the research encounter may be all that is required to ensure continued research integrity in the context of inauthentic participants. Taking actions to better know participants upholds our ethical responsibilities to them and also has the effect of identifying inauthentic participants who intentionally falsify their accounts.
Introduction
Virtually mediated data collection, or data collection which takes place through online communication between researchers and participants using platforms such as video conferencing software, is now widespread in qualitative research. Benefits of virtually mediated data collection include increased accessibility for potential participants (Dubé et al., 2023; Hewitt et al., 2022), the ability to target specific groups (Dubé et al., 2023; Hewitt et al., 2022), and increased efficiency of researcher time and resources (Dubé et al., 2023; Hewitt et al., 2022; Owens, 2022). Challenges include excluding participants who do not have or cannot use the required technology (Dubé et al., 2023; Pullen Sansfaçon et al., 2024), distractions in the participant’s environment, reduced awareness of non-verbal feedback, and privacy concerns related to the participant’s environment or how data is stored by the video conferencing platform (Dubé et al., 2023). Earlier literature on online mediated research raised concerns around minimising risk or protecting participants from harm (Convery & Cox, 2012; Gupta, 2017; Keller & Lee, 2003; see also Dubé et al., 2023) and maintaining commitment to the process of informed consent (Convery & Cox, 2012; Gupta, 2017; Keller & Lee, 2003), particularly when researchers cannot verify whether a participant is the required age (Keller & Lee, 2003), has read and understood the study details, or whether the person giving consent is the participant (Vehovar & Manfreda, 2008, cited in Gupta, 2017). While the benefits and challenges of collecting data online were well documented prior to the Covid-19 pandemic, these methods rapidly became normalised in 2020.
Qualitative researchers, however, are reporting unusual research enquiries characterised by ‘bot-like’ initial email contact, and data collection experiences which research teams determine to contain false information about who the participant claims to be or the experiences they report having. Although the scale of the issue is unknown, the rapid increase in publications since 2022 which have discussed falsified or fabricated responses from participants in synchronous online research indicates that the experience is becoming increasingly common (see Hustead et al., 2025 for a systematic review of early publications). This has led to a deeper questioning of how participants engage in qualitative research and, for many researchers, disrupted comfort in our professional practices.
Drivers of the Rise of Inauthentic Research Encounters
The shift to virtually mediated data collection is situated within global socio-economic inequity. The potential financial gain achieved through participation in research projects, where it is often common practice to provide financial compensation for a participant’s time in the form of vouchers or cash payments, is believed to contribute to an increase in ineligible participants sharing inauthentic experiences (Drysdale et al., 2023; Owens, 2022; Pellicano et al., 2023; Pullen Sansfaçon et al., 2024; Sefcik et al., 2023). Ineligible participants may be in the country where the study takes place or from countries where compensation converts to larger amounts in local currency (see Roehl & Harland, 2022, p. 2471).
This has occurred in concert with the rapid development of publicly available Generative Artificial Intelligence (GAI) platforms. Such technology, particularly large language models, may help individuals who set up ‘bot’ systems engage with a high number of research studies, contribute to the same research study multiple times, and create natural-sounding emails and content for interviews or focus groups.
Flags for Inauthentic Encounters
While no single characteristic can identify an inauthentic participant (O’Donnell et al., 2023), the published literature has particularly focused on ‘red flags’ in participants’ initial communications (see Hewitt et al., 2022; O’Donnell et al., 2023; Owens, 2022; Peach et al., 2023; Pellicano et al., 2023; Ridge et al., 2023; Sefcik et al., 2023). Potential participants may provide researchers with fictional or non-functioning phone numbers or addresses (O’Donnell et al., 2023; Owens, 2022), be direct about asking what compensation or gift cards will be offered or when they will receive them (Owens, 2022; Pellicano et al., 2023; Ridge et al., 2023), or display unusual eagerness or flexibility to take part in the study (Pellicano et al., 2023).
The urgency to participate is notably unexpected for research projects aiming to build trust with specific segments of the population or to engage participants around sensitive or stigmatised topics. Even accounting for strategic recruitment targeting, the identities or group memberships hinted at in the email communications we received would have greatly expanded the range of people successfully recruited up to that point. Although there are many factors affecting participation, a key feature distinguishing inauthentic encounters is participant accounts that significantly change or counter what is known or understood about the research topic (Pellicano et al., 2023).
Approaches to Framing the Issue
There are several ways of characterising the ‘problem’ of inauthentic participants. Some articles draw on criminalising language, referring to participants as ‘fraudulent’ (Hewitt et al., 2022; O’Donnell et al., 2023; Sefcik et al., 2023), ‘false’ (Peach et al., 2023) or ‘scammer’ (Pellicano et al., 2023). Others use ‘imposter participant’ (Ridge et al., 2023; Roehl & Harland, 2022; Sharma et al., 2024), which recognises an actor’s dishonesty around their eligibility for a study or in the experiences they share, while avoiding the criminal implications of the term ‘fraudulent’ (Roehl & Harland, 2022). Jones et al. place emphasis on eligibility criteria and distinguish between ineligible and fraudulent by describing the former as ‘well intentioned’ and the latter as those who ‘complete study tasks inappropriately’ (2021, p. 2). Garcia-Iglesias et al. (2025) refer to ‘suspected participants’ to emphasise researchers’ responses, while Owens (2022) focuses on the ‘implausible’ nature of the data generated. Recent guidance on recognising and responding to ineligible public involvement in health and care research stresses that researchers should avoid prejudging a participant’s intentions and handle the situation with sensitivity, equity, and respect (Health and Care Research Wales et al., 2024, p. 3). The language used plays an important role in the debate about how to address the issue (Lakoff & Johnson, 1980). Recognising that there is no perfect term, we will use the terms ineligible or inauthentic participant and fabricated or inauthentic account throughout this paper.
Responses to the Concern for Research Integrity
In much of the published literature to date, ineligible participants are positioned as a threat to research integrity (Hewitt et al., 2022; McLachlan et al., 2024; O’Donnell et al., 2023; Ridge et al., 2023), potentially affecting how the research can influence policy or professional practice (O’Donnell et al., 2023). Some also consider how it impacts other participants in the research setting (O’Donnell et al., 2023; Sharma et al., 2024). These concerns have prompted calls for intensified screening approaches, including procedural screening solutions such as requiring participants to produce formal identification (see McLachlan et al., 2024) and technical solutions which flag up suspicious qualities such as a mismatch between IP address and participants’ stated location (Sharma et al., 2024).
An emerging body of literature cautions that stringent screening processes may threaten participant privacy (Garcia-Iglesias et al., 2025; Roehl & Harland, 2022), create additional barriers to engagement (Garcia-Iglesias et al., 2025; Pellicano et al., 2023), exacerbate mistrust in researchers and the research process (Drysdale et al., 2023; Garcia-Iglesias et al., 2025; Pullen Sansfaçon et al., 2024), and undermine inclusion efforts (Drysdale et al., 2023; Garcia-Iglesias et al., 2025). Ridge et al. (2023) and Drysdale et al. (2023) discuss the additional labour involved in identifying ineligible participants with fabricated accounts. Gibson and Beattie (2024) highlight that this emotional labour may create or intensify pressures on researchers around funding and unrecognised work.
Our Response
In this paper we build on the rich methodological history of qualitative research, particularly the relationship between knowledge, trust, and doubt. Conducting research online has led to a shift in researchers questioning whether they can trust participants rather than focusing solely on building participant trust in the researcher (Teitcher et al., 2015 cited in Drysdale et al., 2023, p. 376). Having doubts about participants is not new in qualitative research. The high frequency of doubting multiple participants within the same study, however, has changed dynamics around trust. Building on our experiences of emotional and ethical tensions in response to this change, we explore whether and how research integrity may be threatened or ensured by researcher responses. We discuss the task of ‘knowing’ people who may be ineligible participants fabricating their responses in synchronous online encounters. We will focus solely on synchronous data collection methods, such as interviews, focus groups and public engagement, due to the relationship-building elements crucial to these contexts. For discussions on ineligible participants in asynchronous data collection see Jones et al. (2021).
Knowing Within the Research Relationship
As detailed above, the advent of online mediated research, which is beholden to the same ethical standards as face-to-face research, raised concerns over how to attend to responsibilities to participant safety from afar and ensure informed consent when less may be known about the participant. Knowing with whom one is interacting, and co-constructing a research experience (DeVault & Gross, 2014; Holstein & Gubrium, 2020; Sergi & Hallin, 2011), is important when undertaking qualitative research. Qualitative interview and focus group research is epistemologically founded on the concept of ‘knowing’ as derived from a relational experience between researcher and participant. In that interpersonal relationship with participants, and in relationship to the research itself, researchers are responsible for ethical conduct that begins at the initial theorising and planning stages and continues through to research dissemination.
We approach the concept of ‘knowing’ as a combination of methodologies (theoretical frameworks), methods as enacted (research practices), embodied experiences that result from research practices, and reflections on the nature of the knowledge that has been constructed as a result of particular identities and situations, to inform analysis and representation. First, the specific methodologies, which are informed by our motivating ethical principles, require us during planning phases to consider what we know and need to know about the participants we aim to involve in the research. Second, we employ our chosen research methods, which indicate what we view as ethical practices, to attend to our part in constructing the research dynamic and to attune to the participant during the research interaction, for the purpose of knowing participants. This creates the conditions for the research knowing and requires reflection prior to, during, and after the interview. Third, through our embodied experience of the interview we come to know the ‘self’ presented by the participant, including their mannerisms, voice intonation and accent, facial expressions, and gestures, to name a few. We also come to know which emotions and topics they felt comfortable sharing. Fourth, reflections on the nature of the knowledge developed—including knowledge about the participant encountered, performed and co-constructed in the interview—are both required for and further developed during analysis and representational or dissemination stages. How we use that knowing of the other, alongside what has been recorded, to inform the analytical process must be in conversation with reflections on our role throughout the research stages.
Our concept of ‘knowing’ as applied to research participants is a process. The process moves from theoretical conceptualisation, through a lived experience, on to a theoretical consideration of what has been developed, shared, and is now known about the participant. Details of these components of ‘knowing’ research participants are explored further below.
Approach for Developing Methodological Reflections
In early 2023, one of the authors asked on social media whether other qualitative researchers had experiences with ineligible participants fabricating their accounts. This initial post resulted in a group of thirty researchers from across Europe, North America and Oceania coming together to discuss their experiences with ineligible participants. This author coordinated the group through email and then, in recognition of the international range of time zones, created a shared word processing document which invited recipients to record their experiences, key issues they wished to explore within the group, and literature they found helpful for making sense of the experience.
Through this document, researchers shared reflections from recent qualitative studies in which they encountered high levels of bot-like communication during recruitment, such as multiple similar emails received simultaneously, or had later engaged with ineligible participants sharing inauthentic or fabricated accounts. Though many researchers had previous experiences conducting participant recruitment online and through social media, we noticed stark changes in the type of participant responses encountered in recent research projects, spurring reflections and conversations within internal research teams and then across our wider research community.
Following this initial response period, online meetings and email communication led the wider group to organise into three related workstreams. This article is the result of the workstream on research trust and ethics. Members of this workstream self-selected to participate based on their discomfort around stricter screening practices and a desire for potentially more inclusive responses to the emergence of ineligible participants in our work. A group of ten researchers communicated through email and in meetings. We cared about inclusion and shared a reluctance around increased screening before considering a range of responses to the phenomenon of ineligible participants with fabricated accounts.
During group discussions, we shared common experiences, challenges, and questions, and developed themes to characterise our emotional and ethical responses. We explored how these experiences prompted us to question our skills and values as researchers when feeling increasingly sceptical about potential participants and some of the accounts we heard. We questioned what this meant for our ability to empathise with participants, and the potential impact on participants, the research dynamic, and qualitative research projects if we acted unreflexively on the scepticism. We finally considered how we might broach discussion of these themes and reflections with the wider qualitative research community.
We approached synthesis of our reflections as follows.
• All authors read and reflected on wider group experiences captured in the initial shared word processing document to draw out central themes.
• We held a series of collective methodology workshops where we discussed the central themes and shared more about our experiences.
• One author refined the themes based on these discussions.
• Four authors developed these refined themes into the core themes of the paper, interpreting discussions in light of the wider literature.
Some of the encounters included interactions with bot systems which may be used by individuals who intend to fabricate their accounts in order to access the greatest number of interview opportunities. It is not always possible to know the difference between ‘bots’, ‘bot-like’ and human communication at the early interest phase of research.
Methodological Reflections: ‘Knowing’ Participants
Like others who have published on this issue, we became concerned during recruitment upon sensing ‘red flags’ in communication with people who presented as eligible to participate. Initial excitement over recruiting participants who claimed to represent groups that were not yet well represented in the data quickly waned as we began to wonder why recruitment techniques had suddenly been so effective.
We became aware of the need to identify genuine participants yet grappled with the contradictions of being inclusive and welcoming of all interested research participants while also gatekeeping eligibility. Decisions on whether or not to confer trust on participants at the outset of interactions have long been debated (Reinharz, 1992). Researchers respond to doubts about participants’ authenticity in ways that are intimately influenced by their motivations for undertaking the study and how they perceive the research interaction and the concept of ‘truth’ (see Flicker, 2004; Reinharz, 1992). For the authors, it was uncomfortable to mistrust potential and confirmed participants. We questioned the extent to which our biases led to doubts about participants, and reflected on what that meant for us in the context of trying to remove barriers for participants who may not feel included and supported, and who may carry legacies of mistrust in research and researchers (see Scharff et al., 2010).
Identifying initial enquiries from ineligible participants triggered a ‘threat’ response for many of us. This reaction is visible in some of the existing literature, which, as noted above, positions ineligible participants as a threat to research integrity. Various professional ethical guidelines emphasise the value of accurate and truthful findings that can be used for the greatest public benefit. Research integrity towards this end was a concern many of us held when considering inauthentic participants, alongside broader concerns around the ethical treatment of participants during research processes.
As we explored potential responses within our projects, it was clear that choosing which action to take would be fraught with methodological implications. Our experiences uncovered ethical tensions around ‘protecting’ a study’s integrity from the perceived threat to quality, ensuring that the study findings would reflect genuine lived experiences, and knowing that we had not excluded participants with relevant lived experiences as a result of our cultural, linguistic, racial, ableist, or neurotypical biases. Recognising our ethical responsibilities to protect the psychological, social and physical wellbeing of research participants (see Association of Social Anthropologists of the UK and the Commonwealth, 1999; British Sociological Association, 2017), how we would respond to doubts about participant eligibility that surfaced prior to and during interviews became an important consideration. We align with the principle that protection of the participant is paramount when there is a conflict between protecting the participant and protecting the research (Association of Social Anthropologists of the UK and the Commonwealth, 1999, Principle 1.1), and argue that maintaining the dignity of all participants in the face of doubts over their authenticity is required.
Discussing and considering these issues was significant for researchers intent on improving access and inclusion in research. Many of us found that each of the individual ‘red flags’ we identified could be explained on its own, particularly when scrutinising one’s biases. A group of emails received at the same time, for instance, may be the result of several interested individuals emailing from the same event, such as a youth group or support group. The seemingly ‘terse’ and informal writing within the emails may reflect a range of language (e.g. dyslexia, low literacy), cultural (e.g. English as an additional language, cultural norms), or experiential (e.g. having not received guidance on formal email communication) differences, or may come from people with limited time or limited access to the internet, mobile phones, or computers. In this sense, ‘knowing’ participants involved us reflexively questioning our own biases and preconceptions – in other words, we understood ourselves better through examination of our initial judgements.
Grounded in a commitment to ethical and inclusive qualitative research, we argue that this disruptive moment has created an opportunity to reflect on the work of qualitative research across projects, disciplines, and countries; it is an opportunity to revisit old methodological debates and challenges to envision a future for qualitative research in the context of continued virtually mediated interactions and generative artificial intelligence tools. Shifting the focus from participant intentions to the research encounter itself, we share reflections on the processes and challenges of ‘knowing’ participants.
Knowing and Trusting Participants
Since research data is generated through interpersonal communication, researchers desire a degree of knowing participants to inform crucial judgements throughout the research process. In the initial stages of research, knowing expected characteristics of those we wish to involve in the study shapes our methodological choices and design approaches. This knowledge informs the literature we consult, how we judge whether to seek potential participants’ involvement, define eligibility criteria in ways recognisable to the participants sought, how we create participant information sheets, what forms of recruitment take place and through which outlets, and types of questions we plan to ask participants.
We need to know who an individual ‘is’ in some sense, or at least the persona they are presenting, to confirm eligibility. Prior to encountering the rise in ineligible participants, inclusion criteria for qualitative research studies were often used as a form of shorthand identity check, with the expectation that potential participants had some experience in the area being studied. We encourage researchers to enter into more layered interactions that promote knowing, such as pre-interview phone calls, in cases where researchers are uncertain whether a participant meets eligibility criteria.
This relational approach offers an alternative to relying on technical solutions for verifying identity, which can be practically challenging and error prone. For example, verifying an individual’s location does not equate to their trustworthiness, and immigration, emigration, and travel do not necessarily undermine one’s eligibility nor negate their experience of the topic. Procedural changes to gather proof of identity also raise questions around whether researchers have the ‘right’ to objectively ‘know’ an individual’s identity beyond what they have chosen to disclose. Requirements for proof of identity may burden individuals, and knowing someone’s legal name does not necessarily prove they have the lived experience of interest in the study.
Knowing Through Interactions
Although synchronous online research benefits from the immediacy of connection between researchers and participants, it has reduced the ways in which we can know one another through interactions. Discovery of inauthentic participants caused us to question our ways of ‘knowing’ participants in online interactions when we cannot easily assess the full range of relational cues such as body language and comportment. Our commitment to inclusive practices motivated us to determine strategies to avoid excluding genuine participants who faced communication barriers or came from a different social or cultural background that influenced their communication styles.
In our various research projects, we implemented changes that expanded how we could get to know research participants in the absence of in-person interactions. When we continued to feel unsure about who a participant was, slowing the process down and increasing the number of synchronous interactions prior to interviews, rather than engaging solely through email, helped to increase trust between researcher and participant. This also provided the ability to know participants better over time, however brief, and reduced the likelihood that inauthentic participants would invest in the research project.
Setting the tempo to focus on the needs of authentic participants involved undertaking pre-interview conversations to build trust and ensure participants fully understood the proposed research and the implications of involvement. Some researchers undertook this step over the phone, while others avoided phone communication and opted for a video call separate from the interview meeting, at times due to an awareness of disparities in access to technology and a desire to avoid requiring participants to access a landline or share mobile phone numbers. While it was important to be respectful of participants’ time and of funding constraints on project resources, ensuring that the conversation about involvement took place synchronously allowed us to build trust and develop a sense of whether potential participants understood the research and the implications of involvement before consenting. It also provided an opportunity to arrange a mutually convenient time for the interviews. Potential participants whom we suspected of being ineligible engaged with a sense of urgency around returning consent forms and wishing to schedule interviews within 48 hours. Offering more realistic timescales for busy participant and researcher schedules had the added benefit of discouraging ineligible participants seeking quick access to financial compensation.
During interviews, researchers use clues from previous interactions with study participants, such as their stated motivation for involvement, where their attention lies, and the emotional state they are projecting, to assess whether a particular participant appears comfortable taking part or needs a moment to calm their nerves, further discussion of the project, a gentle conversation with small talk, or a way to bow out of the interview before it begins. Through these actions we begin to build trust between researcher and participant, attending to our ethical duties to ensure that we have actively considered and taken steps to mitigate potential harms arising from research involvement.
Another way of knowing participants during online interviews is through our (admittedly limited) ability to read what body language and facial expressions are visible on camera. In situations where participants entered video calls with their cameras turned off, some researchers asked them to consider turning the camera on for a few minutes of casual discussion prior to beginning the recording. Researchers explained that having a visual in addition to the audio increased their confidence that they were accurately interpreting what was being said. It could also help determine whether the participant was alone or whether someone was coaching them through their responses. Additionally, the illusion of eye contact in online interviews can help build rapport. Sometimes participants who refused, or who turned the camera on for only a brief glance, ended up sharing interview responses that were later identified as fabricated.
If the participants did not share adequate detail or the expected embodied emotion in their responses we considered possible reasons why, such as the impact of trauma on understanding and recall, communication differences, or not trusting the researcher.
By gently asking follow-up questions to clarify details or understand how participants framed the topic, researchers sought to maintain trust and dignity for genuine participants. While avoiding the expectation that participants can fully reflect on their experiences and ‘represent’ themselves (see Alldred & Gillies, 2002, p. 141), this common step for ‘knowing’ genuine participants also at times confirmed suspicions of fabricated accounts. As is well documented across the qualitative research canon, continual attention to reflexivity can help to surface our assumptions and expectations related to our inquiry, including about participants.
Attention to knowing the participants we wished to include rather than exclude required our full presence and reflexivity, drawing on cultivated skills and methods in the relational dynamics of qualitative research. Emphasising researcher behaviour rather than suspicions about participants aimed to retain participant privacy and dignity.
This approach to inauthentic participants does not require technical solutions to ‘protect’ research integrity. It instead builds on human solutions—researcher skills in attending to interpersonal interactions—to ensure that research integrity is based on inclusion. While getting to know genuine research participants will not provide a foolproof means of identifying those who fabricate their accounts, such participants often became apparent through these actions. From our experiences, there was a fine line between doubt about the truthfulness of a participant’s account and accounts that fell outside of expected narrative forms. During some interviews, the cumulative experience of pre-interview and interview interactions made it clear that the participants were fabricating their stories. These interviews tended to be short and lacking in detail even when researchers asked follow-up questions or provided an opportunity to share additional thoughts on the experiences. Given the responsibility to maintain safe research interactions for genuine participants, allowing such interviews to come to a natural end was preferable to stopping the interview prematurely. For researchers who are comfortable in their certainty around a participant’s ineligibility, it is crucial to remember the humanity of the ineligible participant, who may be taking part out of financial necessity, and to treat them with dignity in line with any other participant.
Knowing Participants Within the Wider Dataset
Many of us undertook interviews during or after which we doubted certain responses to an extent that made us question whether the participant had fabricated their account. Ultimately, there are many reasons why ‘truthful’ participants may provide information that raises suspicions. Participants have the right to: make mistakes, elude questions, maintain privacy by omitting or changing details, have alternate interpretations, have flawed memories, and not trust the researcher.
The discomfort provoked by verifying data after it was generated revealed the difficulty of knowing whether a story is fabricated – balancing confidence in ‘knowing the field’ (Owens, 2022, p. 125) against potential personal biases (Jones et al., 2021). This often entailed additional reflection and emotional labour, as the researcher first identified the assumptions and biases that may have shaped their interpretation of the account. When details augmented or altered what was known about the topic, we reviewed what we knew about participants’ intersectional identities that may have led to differing experiences. Considering the experiences behind accounts was important to us from a justice perspective (see Garcia-Iglesias et al., 2025). Some fabricated accounts were only noticed because the researchers knew the subject area well and could recognise implausible details (for examples, see Owens, 2022).
A strategy that helped with this was taking steps to further ‘know’ the data gathered: suspending judgement during transcription and transcript alignment; revisiting the recording and transcript without the pressures of the live interaction; and asking a second project researcher to review the recording and transcript and note any concerns. We then contextualised the transcript alongside others, assessing its plausibility rather than its inherent truthfulness, to make a judgement on whether to include it in the dataset. Garcia-Iglesias et al. (2025) suggest analysing suspected data in a separate category to interrogate researcher judgements about what constitutes ‘authentic’ ‘identities, narratives, findings and methods’ (p. 6).
Discussion
‘Knowing’ participants has always been important to research design, from conceptualisation through dissemination. Throughout the process of discussing these issues with colleagues, and reviewing the literature, we have seen many proposed techniques and approaches to addressing the challenges of ineligible or inauthentic participants. These are too varied, numerous, and context specific to name here, but see McLachlan et al. (2024) for ethical considerations of common proposals. Increased screening practices are an attempt at ‘knowing’ participants that focuses on one element of the research process to the detriment of others. A focus on screening out inauthentic participants during recruitment will not remove all those who choose to fabricate their accounts during data collection. As the technology used by ineligible participants develops, checklists or specific techniques may quickly become outdated and insufficient. Singular responses to this emerging issue through mechanisms of exclusion not only interfere with the justice component that underpins many methodological approaches but may also lull researchers into a false sense that continued attention to the issue throughout the rest of the research encounter is not required. We can, however, use our reflexive skills as researchers to develop responses and react in real time.
We argue that the real tool against ‘fraudulent’ participation is the researcher themselves and the relational dynamic of the research encounter. Research methods that emphasise reflexivity, building rapport, considering the broader socio-political and cultural contexts affecting participants, and attention throughout the research encounter will help ensure continued research quality and inclusion within the context of inauthentic participants. As a global community of qualitative researchers, we are well-placed to respond to this moment. Indeed, this moment has the potential to draw our collective attention to the key aspects of qualitative research that we seek to preserve, cultivate, and promote. It will require continued reflection and public transparency around chosen actions.
Looking Ahead: How to Future-Proof Responses to Ineligible Participants
As Generative Artificial Intelligence continues to develop, technological responses to ‘knowing’ participants, their eligibility, and their trustworthiness will become obsolete. The increasing sophistication of Generative Artificial Intelligence, particularly large language models, will likely contribute to ever more convincing communication from potential participants planning to fabricate their accounts. In this context, technological solutions will pose a risk to research integrity if they are applied in reactionary ways without considering the methodological principles driving the research and how each action facilitates or undermines research aims. Leaning into researchers’ relational skills, particularly around trust-building (Drysdale et al., 2023), can be an antidote and ensure ethical conduct that maintains the integrity of our inquiry.
The solutions we propose for enhancing research integrity through knowing participants create some tensions. Synchronous approaches to online research require a reclamation of research time and resources, redirected from physical travel to synchronous communication, which may have implications for funding and project timescales. Some have raised legitimate concerns that the time and emotional burden of potentially investing in inauthentic participants may divert resources away from participants with the lived experiences actually sought and, by consequence, constrain the number of participants who take part (Gibson & Beattie, 2024; O’Donnell et al., 2023; Ridge et al., 2023). These tensions must be balanced with the ethos of inclusion that we seek to cultivate in our research. No single solution will be right for all projects; each response has consequences for participants, researchers, the research study, and the qualitative research enterprise itself.
Taking time across recruitment and data collection to facilitate authentic research encounters requires investment from researchers, participants, and funders. Getting to know participants and their data through researcher labour, such as additional early conversations or cross-checking interviews prior to analysis, must be written into funding and ethics review board applications. Ethics committees and institutions have a role to play in supporting researchers in response to inauthentic participants (see Gibson & Beattie, 2024; Ridge et al., 2023; Sharma et al., 2024). Having thoroughly considered potential actions across the life cycle of a study, researchers can partner with ethics committees to devise innovative approaches that support knowing. The institutional role must include supporting researchers to further develop their reflexivity and relationship-building skills, with a particular focus on junior researchers, who may need additional training to build confidence in their emerging skills within the research dynamic. Inclusivity is not simply a question of access; it is a relationship.
Revisiting Quality in Qualitative Research
Our analysis contributes to resurfacing core questions about what constitutes quality in qualitative research, particularly around the integrity of the research process and findings. Ultimately, we do not believe that these so-called impostor participants pose a threat to the data we collect, analyse, or present, provided that methodological and ethical motivations are thoroughly considered when designing research. We argue that how we seek and come to know participants shapes qualitative research integrity in the context of ineligible participants. ‘Impostors’ are not a threat to the integrity of qualitative research; they are a reminder that the quality of online research is based on our skills to engage with the individual on the other side of the computer screen.
Two persistent issues at stake regarding data integrity are: 1. that inauthentic accounts could ‘spoil’ a dataset, requiring stringency in gate-keeping; and 2. that stringency measures could result in excluding the people we most wanted to know in our research. The question then becomes how to identify what elements of quality we value as a qualitative research community. If integrity equates to ‘pristine’ data, then the appropriate response may be ensuring that all data has been collected from individuals whose identities and lived experiences have been verified. If integrity, on the other hand, means applied rigour in service of inclusion, the response is likely to be contributing additional researcher resources to better knowing and considering all potential participants. The approach that we advocate – foregrounding the quality of the relationship – could be a way to ensure both.
Considering ineligible participants and fabricated accounts prompts us to reflect on what we value regarding research quality, to scrutinise areas where we may have become too comfortable, such as unquestioningly trusting stated eligibility, and to turn fresh eyes to the research dynamic, individual and participant identities, and the factors shaping our interactions. It provides opportunities to deepen our knowledge of participants through research interactions and exploration of biases, as well as the wider context around participant behaviour and communication, including the socio-cultural or political constraints and communication barriers they may experience. Qualitative data is generated and co-constructed within a specific context. We must not assume that the incidental inclusion of inauthentic participants simply damages the data but instead consider how it changes the research at each stage. Building time and consideration into research design and methods, funding applications, and ethical review processes, from recruitment through dissemination, is key to implementing relational strategies that preserve data integrity. Our conclusion is that the impact of inauthentic participants on the research may not be wholly negative, particularly if it prompts researchers to implement additional ways of knowing and building rapport to supplement what has been lost through online research.
Acknowledgements
The authors wish to thank the researchers who generously shared their experiences of responding to ineligible participants, particularly those who contributed to early discussions on ethics: Professor Lindsay Bearne (School of Health & Medical Sciences, City St. George’s, University of London); Dr. Peter Higgs (Department of Public Health, Latrobe University); Professor Janet Hoek (Department of Public Health, University of Otago, Wellington); Dr. Jaqui Lovell (Survivor Researcher Network C.I.C.); Gill Mein (City St. George’s, University of London); Dr. Amy M. Russell (Leeds Institute of Health Sciences, University of Leeds); Rachael Stemp (Anna Freud, UK); Associate Professor Meredith Vanstone (McMaster University); and, Assistant Professor Carly Whitmore (Faculty of Health Sciences, McMaster University). We would also like to thank Dr. Heather Ottaway, Dr. Alex McTier, and Dr. Nadine Fowler (Centre for Excellence for Children’s Care and Protection, University of Strathclyde) for review and comment on the manuscript.
Author Contributions
Conceptualisation of paper, methodological design, and central theme development: B.L.L.D., A.D., and J.A.E. Refinement of themes and initial content drafting: B.L.L.D. Writing, reviewing and editing draft: B.L.L.D., A.D. and J.A.E. All authors undertook critical review, editing and writing of the final manuscript; all authors approved final version.
Funding
The authors received no financial support for the research, authorship, and/or publication of this article.
Declaration of Conflicting Interests
The authors declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
