Abstract
The argument against assessment in MOOCs is that traditional means are impossible given the volume of students. Crowdsourcing, by contrast, can provide quality assessment through the diversity of its participants and its patterns of open discourse. Crowdsourced responses allow the assessment activity to serve as a knowledge-building tool for both the student and the participating community. The authors advance the argument that two indices of learning are a student's ability to integrate learning elements and the subsequent discourse initiated by the student's integration. On this premise, student performance in MOOCs can be quantitatively assessed by a means that takes advantage of the volume of students rather than being hindered by it. In the method suggested in the article, an instructor generates a collection of learning elements, and students are prompted to generate integration rationales that link these elements and then post the rationales on a discussion board. Data are then collected on the number of integration rationales each student creates, the number of those rationales that receive a response from other MOOC participants, and the volume of responses individual rationales receive. The authors note that the individual assessor determines how the data are interpreted and applied.
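The counting scheme the abstract describes could be sketched as follows. This is a minimal illustration only, not the authors' implementation; the discussion-board data layout, field names, and function name are all assumptions introduced here.

```python
from collections import defaultdict

# Hypothetical discussion-board data: (author, post_id, parent_id).
# parent_id is None for an original integration rationale, or the id of
# the rationale being responded to. All names are illustrative.
posts = [
    ("alice", 1, None),   # alice posts an integration rationale
    ("bob",   2, None),
    ("carol", 3, 1),      # carol responds to alice's rationale 1
    ("dave",  4, 1),
    ("alice", 5, 2),      # alice responds to bob's rationale 2
]

def assessment_metrics(posts):
    """Per-student counts: rationales created, rationales that drew at
    least one response, and total responses received."""
    author_of = {}                  # rationale_id -> author
    created = defaultdict(int)      # author -> rationales created
    responses = defaultdict(int)    # rationale_id -> response count
    for author, post_id, parent_id in posts:
        if parent_id is None:
            author_of[post_id] = author
            created[author] += 1
        else:
            responses[parent_id] += 1
    stats = {}
    for author, n in created.items():
        ids = [pid for pid, a in author_of.items() if a == author]
        stats[author] = {
            "rationales": n,
            "rationales_with_responses": sum(1 for pid in ids if responses[pid] > 0),
            "total_responses": sum(responses[pid] for pid in ids),
        }
    return stats
```

On the sample data above, alice's single rationale draws two responses and bob's draws one; how such raw counts are weighted or interpreted is, as the abstract notes, left to the individual assessor.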
