Abstract
Web surveys have become the dominant mode of survey data collection. They offer advantages in cost and time over other modes and are the new normal in survey research for many populations and fields of study. Nevertheless, web surveys, like other survey modes, are affected by the consequences of satisficing behavior, which is attributed, among other things, to low motivation among respondents. We assume that respondent motivation can be increased by a respondent-friendly questionnaire design (Dillman, 2000), which we achieved by implementing a chatbot-like questionnaire interface. In a randomized field experiment conducted among university students, we employed a between-subject design comparing a chatbot-like interface and a traditional web survey design administering the same questionnaire. We assessed respondent evaluation as well as data quality indicators such as response time, non-differentiation, item missing rates, and the length of answers to narrative open-ended questions. Results indicate that respondents perceived the chatbot-like design as more original and entertaining than the traditional web survey design. By contrast, participants rated the chatbot-like interface as more difficult to navigate. Analyses of response time and of the character count of answers to open-ended questions showed no significant differences between the two designs. The proportion of respondents with at least one missing item was marginally lower in the chatbot-like design, while the degree of differentiation on one of the multi-item scales was higher in the traditional web survey design.
