Abstract
Variations in readers' content schemata explain why different readers may construct different meanings for a given text and still be correct. When aiming to reflect this relativity of meaning in meaning construction tests, test developers may face theoretical, ethical and practical questions: Are any limits on the free interpretation of a text justifiable? If so, what are these limits? And if such limits are set, how can they be reflected in a fair, objective and feasible reading test? Following principles emerging from work by Van Dijk and Kintsch (1983) and Alderson and Short (1981), a meaning consensus criterion answer (MCCA) is suggested as a basis for a relative-meaning reading comprehension test. The MCCA is derived from analyses of model answers from a sample of readers with diverse professional backgrounds and levels of expertise. Thus, the MCCA represents both essential, full-consensus components of text meaning and partial, but still considerable, consensus components. It is recommended that the MCCA be used as a basis for item scoring in order to provide a feasible and objective, yet more relative and therefore unbiased, tool for the testing of meaning construction. The paper includes a discussion of the theoretical rationale for the MCCA and a detailed, exemplified report on the procedures the MCCA involves. A discussion of the MCCA's reliability, internal consistency, discrimination power and score meaning follows, and suggestions for future research are made.