Abstract
Crowdsourcing offers fast and cost-effective access to human labor for business projects as well as to research participants for scientific projects. Because of the loose ties between crowdsourcing employers and workers, quality control is even more important than in the offline realm. We developed and validated the web-delivered attention test attentiveWeb in two versions, (1) to derive advance filters that identify workers who produce low-quality results and (2) to gauge the attention of workers who pass the advance filter. We applied attentiveWeb in three parallel user studies: one on the crowdsourcing platform Microworkers (N = 539), another on Figure Eight (N = 333), and a third in the online panel WiSoPanel (N = 1,837). The user studies confirm that advance filtering is useful for screening out low-performing workers, and we propose an easily computed filter based on objective user behavior during attentiveWeb. With regard to attention, despite the stricter advance filtering, workers from Microworkers were the least attentive, followed by workers from Figure Eight, while attention was highest in WiSoPanel. These platform differences in attention were not entirely explained by known differences, demographic and otherwise, among the users of the three platforms. The attention test attentiveWeb has high Cronbach’s α and split-half reliability, and its first version predicted the performance of the same crowdworkers on the second version two years later. We release attentiveWeb for assessing crowdworkers’ attention to the research community and the wider public; the test is open source and free to use.
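The abstract reports high Cronbach’s α and split-half reliability for attentiveWeb without showing how these statistics are obtained. The minimal sketch below illustrates both computations on a hypothetical item-score matrix; the function names, the 0-5 scoring, and the odd/even split are assumptions for illustration, not details of attentiveWeb itself.

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for an (n_respondents, k_items) score matrix."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)      # sample variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)  # variance of the total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

def split_half_reliability(scores):
    """Odd/even split-half correlation with Spearman-Brown correction."""
    scores = np.asarray(scores, dtype=float)
    odd = scores[:, 0::2].sum(axis=1)   # sum of odd-numbered items
    even = scores[:, 1::2].sum(axis=1)  # sum of even-numbered items
    r = np.corrcoef(odd, even)[0, 1]    # Pearson correlation of the halves
    return 2 * r / (1 + r)              # Spearman-Brown step-up to full length

# Hypothetical data: 6 workers x 4 attention-test items, each scored 0-5.
demo = np.array([[5, 4, 5, 4],
                 [2, 1, 2, 2],
                 [4, 4, 3, 4],
                 [1, 2, 1, 1],
                 [5, 5, 4, 5],
                 [3, 3, 3, 2]])
print(f"Cronbach's alpha:       {cronbach_alpha(demo):.3f}")
print(f"Split-half reliability: {split_half_reliability(demo):.3f}")
```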