Abstract
Evaluation of music performance in competitive contexts often produces discrepancies between expert judges. These discrepancies can be reduced by using appropriate rubrics that minimise the differences between judges. The objective of this study was the design and validation of an analytical evaluation rubric that would allow the most objective evaluation possible of a musical solo performance in a regulated official competition. A panel of three experts created an analytical rubric made up of five review criteria and three scoring levels, together with their respective indicators. To validate the rubric, two independent panels of judges used it to score a sample of recordings. The dimensionality, sources of error, inter-rater reliability and internal consistency of the experts’ scores were examined. The essential unidimensionality of the rubric was confirmed. No differential effects between raters were found, nor were significant differences seen in each rater’s internal consistency. The use of a rubric as a tool for evaluating music performance in a competitive context has positive effects, improving the reliability and objectivity of the results, both in terms of intra-rater consistency and agreement between raters.
