Abstract
The Common European Framework of Reference (CEFR) posits six levels of proficiency and defines these largely in relation to empirically derived difficulty estimates, based on stakeholder perceptions of which language functions, expressed as ‘Can-do’ statements, can be successfully performed at each level. Though it also contains much valuable information on language proficiency and advice for practitioners, in its present form the CEFR is not sufficiently comprehensive, coherent or transparent for uncritical use in language testing. First, the descriptor scales take insufficient account of how variation in contextual parameters may affect performance by raising or lowering the actual difficulty of carrying out the target ‘Can-do’ statement. In addition, a test’s theory-based validity, a function of the processing involved in carrying out these ‘Can-do’ statements, must also be addressed by any specification on which a test is based. Failure to explicate such context- and theory-based validity parameters, that is, to define comprehensively the construct to be tested, vitiates current attempts to use the CEFR as the basis for developing comparable test forms within and across languages and levels, and hampers attempts to link separate assessments, particularly through social moderation.