Abstract
This article reports the results of an investigation of the convergent criterion-related validity of two computer-based tools for scoring concept maps and essays, conducted as part of the ongoing formative evaluation of these tools. In pairs, participants researched a science topic online and created a concept map of the topic. Later, participants individually wrote a short essay based on their concept map. The concept maps and essays were scored by the computer-based tools and by human raters using rubrics. Computer-based concept map scores were a very good measure of the qualitative aspects of the concept maps (r = 0.84) and an adequate measure of the quantitative aspects (r = 0.65). The computer-based essay scores were likewise an adequate measure of essay content (r = 0.71). If computer-based approaches for scoring concept maps and essays can provide a valid, low-cost, easy-to-use, and easy-to-interpret measure of students' content knowledge, then these approaches will likely gain rapid acceptance by teachers at all levels.