Abstract
Two kinds of scoring templates were empirically derived from summaries written by experts and students to evaluate the quality of summaries written by the students. This paper reports students' attitudes towards the use of the two templates and their differential statistical effects on the judgment of students' summarization performance. A summary consistently received higher scores when judged by the popular (student-derived) template, regardless of the language and the language order (English then Chinese, or Chinese then English) in which it was produced. However, scores from the expert template were slightly better predictors of students' reading abilities as measured by the FCE and TOEFL. The majority of the students strongly preferred the expert template. Their arguments centered mainly on the differences between students and experts in language ability and experience, their stereotypical status in educational assessment, and the dialectical interpretations of 'quantity' and 'quality'. Implications of these findings are discussed with specific reference to the value of involving test-takers in the development of assessment criteria.