Abstract
The increasing use of crowdsourcing platforms for behavioural research rests on the assumption that research participants are exclusively human. This assumption is now under threat. AI agents embedded in browsers such as OpenAI’s Atlas and Perplexity’s Comet can autonomously complete online surveys. These agents can simulate specific personas or demographic profiles, follow survey prompts, select responses and submit data with fluency and internal consistency. Such capabilities threaten data authenticity and integrity, especially because subjective perception, motivation and emotion are central to behavioural research. This research note outlines practical mitigation strategies for detecting AI-generated responses. Beyond immediate measures, the emergence of AI-generated survey data requires broader methodological reflection, updated ethical guidelines and transparent reporting practices. We also situate these risks within the emerging literature on synthetic data, distinguishing unauthorised AI-generated responses from the transparent, theory-driven use of synthetic data for research purposes. Finally, we offer a forward-looking research agenda for protecting human data while responsibly engaging with synthetic data in marketing research. Rather than treating AI solely as a threat, researchers can use this moment as an opportunity to strengthen methodological rigour and protect the authenticity of human data in an increasingly automated research environment.
