Abstract
Library professionals and library researchers use surveys to collect data from human subjects, a task that draws on a research methods skill set. However, creating quality surveys can be challenging, requiring close attention to design and revision through validation processes. When survey design errors are not detected prior to data collection, the resulting data is unreliable. Generative artificial intelligence tools could potentially assist library professionals and researchers with survey design error detection, enhancing the quality of the data collected. This research tested the performance of five generative artificial intelligence tools in detecting two common survey question errors: double-barrelled and acronym-dependent questions. While the generative artificial intelligence tools were typically able to detect acronym-dependent questions, they underperformed in the detection of double-barrelled questions. Even the subsequent provision of explicit training on double-barrelled questions did not lead to fully accurate detection across the generative artificial intelligence tools. Generative artificial intelligence tools cannot be relied on for this aspect of survey quality control at this stage of their evolution.