Abstract
Artificial intelligence enables people to create convincing online identities and narratives, which poses a growing threat to qualitative health research conducted online. We report on a UK-wide interview study on financial insecurity and serious illness that, during open recruitment across three university sites, received hundreds of false expressions of interest generated or assisted by large language models. Suspected false expressions of interest and screening call notes were retained and anonymised for qualitative content analysis, following Schreier’s (2012) approach. Analysis of emails and screening calls exposed repetitive templated phrasing, vague accounts of serious illnesses, postcode anomalies, and resistance to brief video verification. A layered authentication workflow comprising postcode checks, UK telephone confirmation, and short introductory calls helped to filter suspected imposters while maintaining accessibility for participants at risk of digital exclusion. This experience highlights the ethical tension between vigilance and trust, and demonstrates the hidden labour and time costs of mitigating imposter participants. By sharing observable red flags, practical screening steps, and their resource implications, we contribute methodological guidance to emerging debates on protecting data integrity in the face of AI-assisted imposter participants. Our case demonstrates that proportionate, manual checks can safeguard authenticity without imposing undue barriers, provided teams remain reflexive about inclusivity and communicate checks clearly in study materials. We outline decision points that researchers and research governance teams can adapt to context, including when to escalate from email screening to telephone or video, how to document anomalies, and how to record exclusions. We argue that journals and funders should recognise verification effort in methods reporting and budgets. The article offers immediate, implementable safeguards for qualitative researchers, and sets priorities for benchmarking detection tools and integrating AI literacy into qualitative methods training.
“Artificial intelligence is like fire: a discovery that can cook your food or burn your house down” - Sam Altman, CEO of OpenAI
Introduction
A large language model (LLM) is a type of artificial intelligence (AI) trained on vast amounts of textual data to predict and generate human-like language. Though technically part of machine learning, LLMs are now commonly referred to as “AI” in everyday discourse. Since 2017, artificial intelligence has become embedded in academic and public life across the world, driven by advances in LLMs. The emergence of OpenAI’s GPT-3 (2020) and ChatGPT (2022) marked a turning point in public uptake. By 2023, generative AI was widely and freely accessible, with ChatGPT reaching over 180 million users (Stanford HAI, 2023). This surge has prompted extensive academic debate, particularly concerning research integrity and implications for labour, knowledge production, and ethics (Floridi & Chiriatti, 2020).
LLMs have made AI-generated text easily accessible (Brown et al., 2020). This has facilitated innovation in research methods, for example, through AI-assisted coding and simulated datasets (Brady et al., 2024). However, digital methods also present verification challenges. Online interviews and focus groups, ubiquitous since the COVID-19 pandemic, often lack robust identity checks (Sharma et al., 2024). Participants can now use AI to construct credible backstories and generate interview responses (Stafford et al., 2024). In some cases, individuals use ChatGPT or similar tools in real-time during interviews, especially in text-based formats (Stafford et al., 2024).
Reports of imposter participants in qualitative studies are increasing (Santinele Martino et al., 2024; Sefcik et al., 2023). Unlike survey bots that complete online questionnaires automatically, these individuals may fabricate entire identities and provide plausible interview data. In a UK paediatric study, 385 of 483 expressions of interest (80%) were considered false (O’Donnell et al., 2023). Similar findings have been reported in the US and Australia (Sefcik et al., 2023; Sharma et al., 2024). Motivations range from financial gain, where incentives for participation are on offer, to opportunistic misrepresentation (e.g., individuals misrepresenting their eligibility simply to take part). In some cases, individuals use AI-generated emails to enrol multiple times in the same study (O’Donnell et al., 2023; Sharma et al., 2024). This is a particular risk in health studies focused on rare conditions or with structurally vulnerable participants, where false narratives may be harder to verify and challenge than in studies with broader or more easily verifiable populations.
Robust qualitative research depends on clarity about what the data are and how they were produced. When data is drawn from suspected imposter participants, the corpus no longer represents accounts grounded in lived experience of the phenomenon under study, which can compromise interpretive claims if treated as such. At the same time, qualitative rigour also lies in reflexively appraising data quality, documenting uncertainty, and being transparent about exclusions and decision making, rather than assuming authenticity (Sefcik et al., 2023). The problem is not that qualitative analysis cannot handle messy or uncertain material, but that unrecognised imposter participation can shift the object of analysis without researchers realising.
Concerningly, there is also an increased risk of harm and implications for privacy when imposters participate in focus groups, especially when genuine participants share sensitive experiences (Santinele Martino et al., 2024). However, there is also a danger that additional scrutiny during the recruitment process may disproportionately exclude those with atypical communication styles or limited digital access (Santinele Martino et al., 2024).
Although this issue is relatively new, efforts have been made to establish guidelines for best practice. Recommended approaches include: multi-stage screening (e.g. verifying IP addresses, conducting pre-interview calls); identifying generic or inconsistent responses; and limiting public mention of incentives (O’Donnell et al., 2023; Sharma et al., 2024). While AI detection tools have been proposed, they remain unreliable (Stafford et al., 2024). Current literature on this topic also highlights the role of reflexivity and avoiding assuming all anomalies are false (Santinele Martino et al., 2024).
The terminology used to describe this phenomenon is still under debate. While some literature refers to “imposter participants,” others argue that this phrasing may carry derogatory connotations and instead advocate for more neutral terms such as “ineligible participants” (Heaphy et al., 2025). This reflects broader ethical concerns around language, stigma, and researcher assumptions. Recent workshops and position papers have begun to shape emerging guidance, with “ineligible participants” likely to feature in future recommendations. However, for clarity and consistency with existing literature, we use the term “suspected imposter participant” throughout this paper to refer to individuals who use AI-assisted identities to gain access to qualitative research studies.
We present a case study of a qualitative interview study in which suspected imposter participants were identified during the recruitment process. The study, ‘Unreached - the impact of financial insecurity and socioeconomic deprivation in rural and urban areas’, began in 2024 as a collaboration between the University of Southampton, Liverpool John Moores University and the University of Glasgow. This UK-wide study explored how financial insecurity and socioeconomic deprivation affect access to care and support for people living with serious advanced illness and their family carers.
Methods
In-depth semi-structured interviews were conducted with individuals living with serious advanced illnesses and their carers, including bereaved carers. Participants (target n=60, 20 per site) were recruited through national support services, third-sector organisations, and social media platforms across the UK. The interview guide explored participants’ lived experiences, focusing on financial and practical challenges while also highlighting community assets and sources of local support.
Each participating institution covered a different area of the UK: the University of Southampton led work in the South of England, Liverpool John Moores University covered North England and Wales, and the University of Glasgow focused on Scottish rural, coastal, and island locations. Each site had senior academics acting as Principal Investigators (PI), and a Research Fellow (RF) responsible for recruitment and data collection. The full team met every two weeks to share updates and discuss any arising issues.
Interviews were offered face-to-face, by telephone, or by video call, depending on participant preference. Research Fellows posted recruitment adverts in local carers’ groups and condition-specific forums on X (formerly Twitter) and Facebook. A typical example of the wording used in posts is shown in Figure 1 (an example of the recruitment notice used on social media).
Suspected imposter participants began to appear in June 2024, initially within the Scottish recruitment site. This was followed shortly afterwards by similar cases at Liverpool John Moores University and then the University of Southampton. This was an unprecedented experience for most research team members. Suspected false contacts in the Scottish sample were retained and anonymised (although we believe that many of the names and details provided were themselves bogus). These anonymised emails were collated for possible inclusion in a future publication examining the phenomenon.
The emails were analysed using qualitative content analysis to identify recurring features in syntax, tone, sentence structure, and phrase repetition. Qualitative content analysis enables the systematic classification of textual data through coding and theme identification (Bengtsson, 2016; Schreier, 2012). The data analysis focused on identifying shared patterns in formatting, email structure, vocabulary, and engagement style.
We recognise that no single feature, including “generic” email addresses, polished grammar, or inconsistent accounts, can confirm inauthentic participation. Many genuine participants may write briefly, use non-idiomatic English, or use tools such as ChatGPT to improve clarity. We therefore treated potential “red flags” only as prompts for proportionate follow-up. A contact was classified as suspected ineligible or inauthentic only when multiple indicators co-occurred and the individual could not complete minimal verification steps required to confirm eligibility and safe participation, for example providing a plausible full postcode consistent with the study geography, supplying a UK telephone number, and answering straightforward context-based questions in a short introductory call (for example about diagnosis, local area, or local services). This approach prioritised conservative decision making, recognising the possibility of false positives, and aimed to avoid excluding genuine participants on the basis of communication style alone.
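To make this decision logic concrete, the sketch below expresses the same conservative rule in Python. It is illustrative only: our screening was entirely manual, and the indicator names are our own shorthand rather than fields from any instrument we used.

```python
from dataclasses import dataclass

# Hypothetical indicator flags; in the study these judgements were made manually.
@dataclass
class ContactIndicators:
    reused_advert_phrasing: bool = False   # wording copied verbatim from the advert
    implausible_postcode: bool = False     # partial, urban, or business-premises postcode
    no_uk_phone_number: bool = False       # declined or unable to supply a UK number
    vague_illness_account: bool = False    # no concrete diagnosis, treatment, or services
    failed_intro_call: bool = False        # could not answer simple context questions

def classify_contact(ind: ContactIndicators, min_indicators: int = 3) -> str:
    """Conservative rule: only classify a contact as suspected ineligible when
    several indicators co-occur AND minimal verification could not be completed.
    Anything short of that triggers follow-up rather than exclusion."""
    n_flags = sum([
        ind.reused_advert_phrasing,
        ind.implausible_postcode,
        ind.no_uk_phone_number,
        ind.vague_illness_account,
    ])
    if n_flags >= min_indicators and ind.failed_intro_call:
        return "suspected ineligible or inauthentic"
    if n_flags > 0:
        return "proportionate follow-up (postcode, UK phone, introductory call)"
    return "proceed to standard eligibility screening"

# A templated email with an implausible postcode alone still only triggers follow-up.
print(classify_contact(ContactIndicators(reused_advert_phrasing=True,
                                         implausible_postcode=True)))
```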
Analysis of communications from suspected imposter participants was conducted with careful consideration of ethical and legal frameworks. Under the UK General Data Protection Regulation (UK GDPR) and the Data Protection Act 2018, processing personal data for research purposes is permissible when it serves the public interest and appropriate safeguards are in place. However, in this context, the communications in question lacked verifiable personal data, as the identities and details provided were falsified. As such, the information was anonymised, ensuring that no identifiable data was retained, thereby aligning with the principles of data minimisation and purpose limitation as outlined in Article 5 of the UK GDPR. The decision to analyse these communications was driven by the necessity to protect the integrity of the research process and to develop strategies to mitigate similar occurrences in future studies.
Findings
Characteristic Responses of Suspected Imposter Participants
Across the three recruitment sites, approximately 400 of the emails received were identified as likely to be false, based on duplicated advert phrasing, inconsistent or implausible postcodes, generic Gmail addresses, and the failure of the ‘participant’ to verify their identity by UK telephone or a brief video call. Through the analysis, several consistent patterns emerged. Most messages were sent from Gmail accounts that paired ordinary names with seemingly random numbers or unusual character combinations.
The body of each email was usually confined to three short paragraphs that mirrored wording from our advertising copy. Phrases such as “serious advanced illness,” “financial struggles,” and “share my story” recurred almost verbatim. While the emails contained relevant keywords, they lacked specific details (such as the diagnosis of the proposed participant) and had perfect spelling and grammar. Although the Americanised “financial assistance programs” (see quote below) may not be indicative of an imposter in itself, when set in the context of other red flags it suggested the message was not a genuine response.
“I’m reaching out to express my interest in participating in your research study on the financial struggles faced by individuals living with advanced illnesses. As a caregiver for my mother, who has been battling a serious illness for the past few years, I’ve seen firsthand the emotional and financial toll it can take. Despite my best efforts to support her, I’ve struggled to navigate the complex healthcare system and financial assistance programs. I’ve had to take time off work to care for her, leading to lost income and added stress. But I’ve also learned valuable lessons about resilience, advocacy, and the importance of support networks. I believe that sharing our story can help shed light on the challenges faced by caregivers and patients alike and inform more effective support systems. I'm available for an interview at your convenience and look forward to contributing to your study.”
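The degree of verbatim overlap between an expression of interest and the advert copy can also be summarised numerically. The following Python sketch is illustrative rather than something we ran during the study, and the sample texts are placeholders rather than our actual advert wording; overlap scores would only ever prompt follow-up, not exclusion.

```python
import re

def ngrams(text: str, n: int = 4) -> set:
    """Lower-cased word n-grams, ignoring punctuation."""
    words = re.findall(r"[a-z']+", text.lower())
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def advert_overlap(email_text: str, advert_text: str, n: int = 4) -> float:
    """Proportion of the email's word n-grams that appear verbatim in the advert.
    A high value suggests templated reuse of advert wording."""
    email_grams = ngrams(email_text, n)
    if not email_grams:
        return 0.0
    return len(email_grams & ngrams(advert_text, n)) / len(email_grams)

# Placeholder texts for illustration only:
advert = "to take part in a study about the financial struggles faced by individuals living with advanced illnesses"
email = "I would like to take part in your study about the financial struggles faced by individuals living with advanced illnesses in my area"
# A high proportion of the email reuses the advert wording verbatim.
print(f"advert overlap: {advert_overlap(email, advert):.2f}")
```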
Messages rarely included concrete details of illness trajectory, treatment, or local services. When prompted, many contacts supplied partial postcodes or generic urban locations, e.g. “AB10 1AB”, which were inconsistent with the rural focus of the study. As time progressed, however, the plausibility of the provided postcodes improved, with some falling within rural areas of Scotland. When checked, though, these postcodes sometimes corresponded to business premises or offices.
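A postcode can be checked in two stages: is it a well-formed full UK postcode, and does it resolve to a locality consistent with the study geography? The sketch below is illustrative only; it assumes access to the freely available postcodes.io lookup service and the requests package, whereas our own checks were performed manually.

```python
import re
import requests  # assumed to be installed; our checks were manual

# Approximate pattern for a full UK postcode (outward + inward code).
UK_POSTCODE_RE = re.compile(r"^[A-Z]{1,2}\d[A-Z\d]?\s?\d[A-Z]{2}$", re.IGNORECASE)

def postcode_plausibility(postcode: str) -> dict:
    """Return basic plausibility information for a supplied postcode.
    The result is a prompt for human judgement about fit with the study
    geography, not an automatic inclusion or exclusion decision."""
    postcode = postcode.strip()
    if not UK_POSTCODE_RE.match(postcode):
        return {"well_formed": False}
    resp = requests.get(f"https://api.postcodes.io/postcodes/{postcode}", timeout=10)
    if resp.status_code != 200:
        return {"well_formed": True, "found": False}
    result = resp.json()["result"]
    return {
        "well_formed": True,
        "found": True,
        "country": result.get("country"),
        "district": result.get("admin_district"),
    }

# e.g. postcode_plausibility("AB10 1AB") resolves to central Aberdeen,
# which would sit oddly with a claim to live in a remote rural location.
```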
In a minority of cases, live screening was necessary. This was conducted via video call where possible, although some individuals insisted on using the telephone. In all cases, such a request was respected, as we were aware of the risk of disenfranchising participants experiencing digital exclusion or with poor internet access due to their rural location. Live screening reinforced the findings from the email analysis. During preliminary video calls, most individuals declined to turn on their camera and offered only short answers, or paused noticeably before answering, during which time we suspected they may have been searching for answers online.
Linguistically, the emails included hallmarks of large language-model assistance. Repetitive clauses, such as “navigate the complex healthcare system” and “resilience, advocacy, and the importance of support networks”, suggested template reuse rather than a live narrative. Sentences were uniformly well-formed yet lacked idiomatic flow and natural wording. In some cases, errors were repeated across messages, such as referring to the interviews as “focus groups”.
The frequency with which emails arrived or were responded to could also provide an additional indication of AI involvement. In one instance, eight emails expressing interest, all with slightly different phrasing, arrived within three minutes. On other occasions, replies to follow-up questions arrived within minutes of being sent, a turnaround difficult to align with manual composition yet consistent with automated output. There was a notable progression from brief generic expressions of interest to more elaborate, polished emails. This suggests that the individual(s) submitting these messages refined their prompting strategies over time as they continued to engage with the research team.
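Arrival-time patterning of this kind can be summarised from mailbox timestamps. The sketch below is a minimal illustration, assuming receipt times have been exported as a list; we reviewed timings by hand rather than running anything like this.

```python
from datetime import datetime, timedelta

def find_bursts(timestamps, window_minutes=5, min_count=3):
    """Return clusters of expressions of interest arriving within a short window.
    Rapid bursts (e.g. eight near-identical emails within three minutes) are hard
    to reconcile with manual composition, but they remain prompts for scrutiny
    rather than proof of automation."""
    times = sorted(timestamps)
    window = timedelta(minutes=window_minutes)
    bursts, start = [], 0
    for end in range(len(times)):
        while times[end] - times[start] > window:
            start += 1
        if end - start + 1 >= min_count:
            bursts.append(times[start:end + 1])
    return bursts

# Illustrative timestamps only (not real data):
received = [datetime(2024, 6, 10, 9, 1), datetime(2024, 6, 10, 9, 2),
            datetime(2024, 6, 10, 9, 3), datetime(2024, 6, 10, 14, 30)]
print(len(find_bursts(received)))  # one burst of three emails within five minutes
```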
Some responses became hostile or dismissive when asked to verify their identity, reinforcing concerns that financial incentives may be the primary motivation for false participation. One such reply read: “Just send me the voucher!”
This abrupt shift in tone suggests that, for some individuals, the primary goal was accessing the incentive rather than contributing to the study. Such responses underscore the ethical and practical tension between making participation accessible and protecting research integrity.
Adapting the Recruitment Process
In light of these patterns, the research team across the three sites continued to meet and share their experiences and suggestions for filtering suspected imposter participants. Typically, incoming messages were screened for email addresses containing unusual name or number pairings, explicit replication of advert phrasing, and absence of location-specific or illness-specific details.
The presence of one or more of the identified “red flags” was not treated as conclusive evidence of an imposter participant, particularly in a study involving serious illness and financial hardship. They were used as prompts for proportionate follow-up alongside other indicators. Whether or not AI use was suspected, all leads were followed up in full. Contacts were asked to confirm their postcode and provide a UK phone number. In most cases, contacts either supplied urban city-centre postcodes or failed to provide a phone number. Where telephone or video screening proceeded, researchers held an ‘introductory meeting’ at which some simple questions were asked, for example about the person’s diagnosis, local area, and available resources. These were questions that authentic participants could answer easily; suspected imposter participants tended to offer vague generalities. Manual screening, rather than automated detection, proved an effective safeguard against imposter participants using AI. However, there was a significant time commitment involved in following up on all potential leads. Across the three recruitment sites, approximately 400 false emails were received.
Discussion
Practical Safeguards
Recent literature has increasingly highlighted the rapid emergence of “imposter participants”: individuals employing false identities or AI-generated content to participate in research studies (Ridge et al., 2023; Sharma et al., 2024). Given the lack of guidance on this issue, researchers have responded by developing methodological adaptations aimed at identifying and managing the phenomenon. One study documented an instance in which an entire cohort of participants in an online focus group study was identified as false, emphasising key indicators or “red flags”, including improbable or duplicated narratives, vague or inconsistent demographic details, overly concise or non-specific responses, fixation on incentives, and participants’ reluctance to engage via video communication (Sharma et al., 2024). These findings align with our analysis, which similarly uncovered hesitancy toward video interactions, minimalistic responses, and pauses suggestive of online searching during initial screening calls. Sharma et al. (2024) employed stringent screening methods, notably verifying participants’ geographical locations and identities through external markers. Our experience supported the view that requests for postcode verification and UK-based phone numbers can provide an effective means of filtering suspected imposter participants.
Additional measures used in qualitative research include pre-interview video screening or initial camera verification, scrutiny of email address patterns, and analysis of response timings and content for recurring scripted narratives (Ridge et al., 2023). Metadata can also be used to confirm that information such as postcodes matches the geographic area in which participants claim to reside (Medero et al., 2025). Consistent with these observations, our research highlighted distinct email characteristics, specifically the repetitive use of commonplace names coupled with numeric combinations and verbatim adoption of recruitment advert phrases. Moreover, our findings indicated that rapid response intervals and the simultaneous receipt of multiple similar emails provided further evidence of potential AI-driven participation.
These protective measures form part of what has been termed an “imposter protocol,” highlighting ongoing adaptations in qualitative methods that enhance rigour without diminishing participant inclusivity (Ridge et al., 2023). However, there is a need to balance vigilance against fraud with an ethical commitment to inclusivity. In one study confronting false interviews within a gender de-transition project, fabricated data were excluded, but the authors cautioned against prematurely dismissing atypical yet genuine narratives (Pullen Sansfaçon et al., 2024). Others have highlighted the ethical tensions involved, arguing that favouring oral communication risks inadvertently excluding individuals reliant on text-based participation (Stafford et al., 2024). Reflecting similar ethical considerations, our research team consciously accommodated telephone interviews, mindful of digital exclusion in rural contexts and participant preference, despite the increased risk of imposter participants. This tension reinforces the importance of manual appraisal of each potential participant, despite the significant time cost this can incur. It also highlights the need for funders to allow for verification costs as part of grant applications, and for researchers to be aware of how to identify potential imposters.
Several indicators discussed in the emerging literature as “red flags” are ethically ambiguous in the context of serious illness and financial hardship. In our study, most genuine participants preferred telephone contact, which may reflect fatigue, fluctuating symptoms, caring responsibilities, limited connectivity, or privacy concerns. We therefore avoided treating telephone preference, brief messages, or reluctance to use video as suspicious in themselves. Similarly, interest in vouchers is an understandable response to incentivised recruitment for people struggling financially and should not be moralised. Nonetheless, our experience suggests that the patterning of discussions regarding the financial incentive may still be informative: many participants who were verified as genuine did not foreground the incentive in initial contact (and some volunteered to decline or donate it), whereas suspected imposters more often fixated on the voucher or became hostile when asked to complete minimal eligibility checks. We interpret this as a practical signal within a wider set of indicators rather than a standalone marker of false participation.
Our use of the term “imposter participant” is a pragmatic shorthand for contacts who could not be verified as eligible and whose communication patterns raised concerns about authenticity. We cannot have one hundred percent certainty that every excluded contact was false, and we acknowledge that some may have been genuine individuals using AI or communicating in a non-normative style. The benefit of making our screening logic explicit is improved transparency about how the dataset was protected, the labour involved, and the ethical reasoning used to balance access with integrity and participant safety. The cost is the risk of stigmatising atypical but genuine participants, and the risk of overstating what can be inferred about intent. To mitigate this, we treat indicators as context dependent, avoid moralising interest in incentives, and recommend that teams document uncertainty, use escalation steps rather than binary judgements, and communicate checks clearly as a routine part of ethical recruitment rather than suspicion directed at individuals.
Ethical Implications of AI Participants
The rise of imposter participants in qualitative health and social care research also provokes broader ethical debate. One major concern is the erosion of trust in the researcher/participant relationship. Traditionally, qualitative inquiry relies on good faith and rapport, but investigators now face the unsettling task of questioning whether a participant is “real” or telling the truth (Hoskins et al., 2025). Interviewers can feel uneasy or even guilty about doubting a participant’s authenticity, which speaks to a broader ethical tension: researchers must protect data validity and other participants, yet risk fostering an atmosphere of suspicion (Ridge et al., 2023). If genuine participants sense excessive scepticism or intrusive vetting, they may feel disrespected or stigmatised. Over-zealous screening could inadvertently exclude or alienate individuals from marginalised groups, the very voices qualitative research often aims to amplify, if their profiles or communication styles fail to fit an expected norm (Santinele Martino et al., 2024).
Heaphy et al. (2025) suggest reframing “imposter” issues through a reflexive epistemology; rather than treating suspect data purely as a contaminant, researchers might examine what such fabrications reveal about participants’ contexts, incentives, or identity play. This perspective challenges a clear-cut view of truthfulness and urges caution in labelling someone a fraud. However, many health and social care researchers adopt the stance that deliberately fabricated identities or AI-crafted narratives violate the informed consent and honesty that ethical research requires (Drysdale et al., 2023; Santinele Martino et al., 2024).
There is also the matter of privacy and confidentiality. To verify identities, researchers might need to collect extra personal information (IDs for example), which must be handled with care to avoid new privacy breaches. The rise in AI participants is then pushing qualitative researchers to reconsider how we uphold core ethical principles: trust, participant welfare, consent, and privacy in an environment where the authenticity of participation can no longer be assumed. Positions span from a protective stance (ensuring no “fake” data corrupts findings or harms participants) to a more interpretive stance, treating the phenomenon itself as worthy of sociological analysis (Heaphy et al., 2025; Ridge et al., 2023).
Policy and Institutional Responses
Institutional and research governance responses to AI-assisted suspected imposter participants are still emerging, but some movement is evident across ethics review, publishing, and funding. Research Ethics Committees are starting to scrutinise online qualitative protocols in the same way they have long inspected data-security plans: some applicants are now expected to justify their identity-verification steps, specify what metadata (e.g. IP logs) will be retained, and explain how suspicious cases will be documented and reported (Drysdale et al., 2023; Heaphy et al., 2025; Ridge et al., 2023). Journals have followed suit. Editorials in Health Sociology Review, Qualitative Health Research, and Sociological Research Online over 2023-2024 advise authors to describe their “imposter protocol” when submitting manuscripts, mirroring the attention-check norm in survey work (Drysdale et al., 2023; Santinele Martino et al., 2024). Funding agencies, alerted by recent incidents in Canada and Australia, are beginning to hint that risk-mitigation plans will soon form part of grant-assessment criteria (Sefcik et al., 2023; Sharma et al., 2024). Professional bodies are also producing interim guidance: the British Psychological Society’s 2024 short guide “How we learnt to battle the bots” recommends email-pattern screening, duplicate-IP checks and live video confirmation, while COPE devoted a 2024 seminar series to the issue (BPS, 2024).
Our own experience shows how institutions are embedding expectations into a patchwork of existing frameworks rather than issuing a stand-alone policy. Under generative AI guidance for researchers, any AI use during data collection or analysis must be declared in the ethics application and reviewed for data-protection compliance. University-level social science rules further require researchers to explain their screening logic (e.g. CAPTCHA tests, postcode and UK-phone confirmation, duplicate-IP filtering) and to state how suspect cases will be handled and later debriefed. Finally, the research ethics committee expects every non-clinical study to outline safeguards for data quality. However, until top-down guidelines are finalised, the field is essentially self-regulating through case reports, rapid-response guidance, and conference workshops (Heaphy et al., 2025; Santinele Martino et al., 2024). These grassroots practices may crystallise into formal standards within the next few years, making the filtering of suspected imposter participants as routine a requirement as data-protection compliance.
Limitations and Future Research
Our analysis is confined to one UK-wide project in a specific health context, so patterns may differ in other disciplines or jurisdictions. Manual screening inevitably risks both false negatives (skilled imposters) and false positives (genuine but atypical participants); future work should triangulate manual checks with validated algorithmic tools. Because we relied on email records and brief screening calls, we could not fully explore suspected imposter participants’ motivations. Longitudinal, multi-site studies, ideally combining qualitative follow-up with technical detection metrics, are needed to gauge how AI-assisted impersonation evolves and to test scalable, privacy-respecting counter-measures.
While existing studies have identified the phenomenon of AI-assisted imposter participants, our findings extend this knowledge by providing a detailed account of how layered manual screening can mitigate the risk without disproportionately excluding digitally marginalised groups. We also highlight that motivations for false participation can be primarily financial rather than ideological, with individuals exploiting incentives intended to make research participation more accessible for structurally marginalised populations. Offering incentives remains ethically important to reduce barriers for underserved groups; however, our experience suggests that future recruitment strategies must balance these incentives with light-touch identity verification to protect data integrity.
Recommendations
The capability of LLMs to generate human-like dialogue has increased markedly over the past five years. Ongoing competition between major technology firms is likely to drive further advances, making the tools even more accessible and sophisticated. As a result, the issue of AI-assisted imposter participants is likely to grow. There is, therefore, a pressing need for qualitative researchers to share examples from practice, refine techniques for identifying and managing suspected imposter participants, and interrogate current guidance to ensure it remains fit for purpose. Based upon the findings of our analysis, and a wider review of current evidence in this area, we make the following recommendations for policymakers, researchers and research institutions.
Policy
Update national and professional ethics codes to demand a “participant-authenticity plan” in every online qualitative submission (Drysdale et al., 2023; Ridge et al., 2023).
Require journals to publish an authentication statement alongside the standard ethics declaration, including an estimate of the number of suspected imposter participants, to benchmark this phenomenon (Santinele Martino et al., 2024).
Ring-fence verification costs in grant budgets and score proposals on the strength of measures to mitigate suspected imposter participants (Sefcik et al., 2023; Sharma et al., 2024).
Practice
Adopt layered checks: postcode and phone confirmation, duplicate-IP screening, brief video/telephone introductions, and an anomaly log (a minimal log sketch follows this list). Our protocol blocked many suspected imposters and did not breach GDPR.
Briefly describe eligibility checks (e.g., postcode and UK-phone confirmation) in the participant information sheet, stressing that they are quick, safeguard data quality, and optional alternatives are available where needed (Garcia-Iglesias et al., 2025).
Build extra time and funds into projects for screening labour or paid ID-verification services.
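As a worked illustration of the anomaly log recommended above, the following Python sketch records each screening decision as a simple audit trail. The field names and file format are our own suggestions rather than a prescribed standard, and teams should adapt them to local data-protection requirements.

```python
import csv
from dataclasses import dataclass, asdict
from datetime import date

@dataclass
class AnomalyLogEntry:
    contact_ref: str            # anonymised reference, never the email address itself
    date_received: date
    indicators: list            # e.g. ["verbatim advert phrasing", "non-UK phone number"]
    verification_attempted: str # e.g. "postcode", "UK phone", "introductory call"
    outcome: str                # e.g. "proceeded", "follow-up", "excluded - unverifiable"
    notes: str = ""

def append_to_log(entry: AnomalyLogEntry, path: str = "anomaly_log.csv") -> None:
    """Append a single screening decision to a CSV audit trail."""
    row = asdict(entry)
    row["indicators"] = "; ".join(row["indicators"])
    with open(path, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=row.keys())
        if f.tell() == 0:       # write the header only when creating a new file
            writer.writeheader()
        writer.writerow(row)

append_to_log(AnomalyLogEntry(
    contact_ref="C-042",
    date_received=date(2024, 6, 18),
    indicators=["verbatim advert phrasing", "postcode resolves to business premises"],
    verification_attempted="introductory call offered, declined",
    outcome="excluded - unverifiable",
))
```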
Future Research
Benchmark detection tools against emerging guidelines and publish accuracy data on open datasets (Gibson & Beattie, 2024; Stafford et al., 2024).
Map the motives, platforms, and regional patterns of suspected imposter participants via interdisciplinary digital-sociology studies (Heaphy et al., 2025).
Embed imposter participant detection and ethics modules into qualitative-methods training for the next generation of researchers and students.
Conclusion
AI-assisted imposter participants, driven by financial incentives, widespread access to generative AI, and limited online identity checks, are likely to increase as governments, major technology firms, and research universities intensify efforts to scale LLMs, enhance their capabilities, and accelerate commercial deployment. This represents a new recruitment landscape in which qualitative researchers must quickly develop new skills of detection and scrutiny. There is a need for awareness-raising and early discussion within research groups, as well as the development of mitigation protocols and the sharing of experiences within the research community. In light of our experiences with suspected imposter participants discussed in this article, we call for qualitative researchers to ensure AI authenticity checks are implemented alongside traditional ethical safeguards. Our experience shows that simple, layered screening offers an immediate, proportionate response. However, future work is needed to benchmark detection tools and embed AI literacy in research methods training to keep pace with rapidly evolving generative technologies.
Acknowledgements
Thank you to all recruitment partners who supported the study.
Ethical Considerations
Ethical approval was granted by Liverpool John Moores University Research Ethics Committee. Local governance approvals were obtained at each participating site.
Consent to Participate
No personal data was used in this project, and UK GDPR guidance was always followed.
Funding
The authors disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: Marie Curie grant MC-22-504.
Declaration of Conflicting Interests
The authors declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Data Availability Statement
Data are available from the corresponding author upon reasonable request.
