Abstract
In an Angoff standard setting procedure, judges estimate the probability that a hypothetical, randomly selected, minimally competent candidate will answer each item in the test correctly. In many cases, these item performance estimates are made twice, with information shared with the panelists between rounds. Especially for long tests, this estimation process can be time-consuming and fatiguing for the judges. This study extended a previously proposed item selection strategy for forming subsets of test items. The results suggest that 40% to 50% of test items may be sufficient to estimate an equivalent passing score in an Angoff standard setting study, provided those items are selected to represent the full test in content, discrimination, and difficulty.
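The computation the abstract refers to can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes the common Angoff convention that the passing score is the sum, over items, of the mean judge probability estimate, and that a subset-based passing score is projected to the full test length under the assumption that the subset is representative. The function names and data are hypothetical.

```python
import statistics

def angoff_cut_score(ratings):
    """Sum of mean judge estimates across items.

    ratings: dict mapping item id -> list of judges' probability
    estimates that a minimally competent candidate answers the
    item correctly (hypothetical structure for illustration).
    """
    return sum(statistics.mean(judges) for judges in ratings.values())

def project_to_full_test(subset_cut, n_subset, n_total):
    """Scale a subset-based passing score to the full test length,
    assuming the subset mirrors the full test in content,
    discrimination, and difficulty (the abstract's condition)."""
    return subset_cut * (n_total / n_subset)

# Hypothetical ratings from three judges on two items.
ratings = {
    "item1": [0.60, 0.70, 0.65],
    "item2": [0.40, 0.50, 0.45],
}
subset_cut = angoff_cut_score(ratings)          # 0.65 + 0.45 = 1.10
full_cut = project_to_full_test(subset_cut, 2, 5)
```

A 40% subset (2 of 5 items here) yields a projected full-test passing score only under the representativeness condition the study examines.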