Abstract
The quality of sign language interpreting (SLI) is a construct of keen interest to practitioners, educators, and researchers, and it calls for reliable and valid assessment. The extant literature offers a diverse array of methods for measuring SLI quality, ranging from traditional error analysis to more recent rubric scoring. In this study, we expand the terrain of SLI assessment by exploring and evaluating a novel method, comparative judgment (CJ), for assessing SLI quality. Briefly, CJ requires judges to compare two like objects/items and decide which is of higher quality. The binary outcomes of repeated comparisons by a group of judges are then modelled statistically to produce standardized estimates of the perceived quality of each object/item. We recruited 12 expert judges to operationalize CJ via a computerized system and assess the quality of Chinese Sign Language interpreting produced by 36 trainee interpreters. Overall, our analysis of quantitative and qualitative data provides preliminary support for the validity and utility of CJ in SLI assessment. We discuss these results in relation to the previous SLI literature and suggest future research to shed light on CJ's usefulness in applied assessment contexts.
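The abstract does not name the statistical model used to turn binary comparison outcomes into quality estimates; a common choice in the CJ literature is the Bradley–Terry model. The following is a minimal illustrative sketch (not the authors' implementation), fitting Bradley–Terry strengths with Hunter's MM updates on hypothetical (winner, loser) judgment data:

```python
from collections import Counter

def bradley_terry(items, comparisons, iters=200):
    """Estimate Bradley-Terry strengths from pairwise outcomes via
    MM updates. `comparisons` is a list of (winner, loser) tuples;
    returns an item -> strength dict, normalized so strengths
    average to 1 (the model's scale is otherwise arbitrary)."""
    wins = Counter(w for w, _ in comparisons)
    strength = {i: 1.0 for i in items}
    for _ in range(iters):
        new = {}
        for i in items:
            # Sum 1/(p_i + p_j) over every comparison involving item i.
            denom = sum(
                1.0 / (strength[i] + strength[l if i == w else w])
                for w, l in comparisons if i in (w, l)
            )
            # MM update: wins divided by the denominator above;
            # items never compared keep their current strength.
            new[i] = wins[i] / denom if denom else strength[i]
        scale = len(items) / sum(new.values())
        strength = {i: s * scale for i, s in new.items()}
    return strength

# Toy data: each tuple is one judge's decision, (winner, loser).
comps = [("A", "B"), ("A", "B"), ("A", "C"),
         ("B", "A"), ("B", "C"), ("B", "C"), ("C", "B")]
scores = bradley_terry(["A", "B", "C"], comps)
# "A" wins most of its comparisons and "C" the fewest,
# so the fitted strengths rank A above B above C.
```

The item labels and data here are invented for illustration; in a CJ study each "item" would be one trainee's interpreted rendition, and the fitted strengths would serve as the standardized quality estimates.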
