Abstract
Evaluators often work with brief units of text-based data, such as open-ended survey responses, text messages, and social media postings. Online crowdsourcing is a promising method for quantifying large amounts of text-based data by engaging hundreds of people to categorize the data. To further develop and test this method, individuals were recruited through online crowdsourcing to code open-ended survey responses using a predetermined list of thematic codes derived from the responses. The study compared the coding results obtained through online crowdsourcing with those obtained from researcher coders, and it also examined feedback from the crowdsourced coders about their experiences with the task. The results suggested that online crowdsourcing can produce results comparable to researcher coding, although the degree of comparability may vary across codes. This method may increase the efficiency of quantifying text-based data and provide evaluators with valuable feedback on their coding schemes.
