Abstract
At present, it is unclear whether math curriculum-based measurement (M-CBM) procedures provide a dependable measure of student progress in math computation, because support for their technical properties rests largely on a body of correlational research. Recent investigations into the dependability of M-CBM scores have found that evaluating probe design procedures is necessary to estimate the contribution of probe-related factors to variance in M-CBM scores. To extend this research, the present study compared a set of commercially available probes (i.e., AIMSweb) with an experimental set constructed to enhance consistency in item content across alternative forms. Two probes were randomly selected from each set and administered to 43 fifth-grade students. Generalizability analyses indicated that the experimental probes generated scores with less error variance than the AIMSweb probes. Follow-up analyses indicated that fewer administrations of the experimental probes were necessary to attain adequate reliability for relative decisions (i.e., >.80) and high reliability for absolute decisions (i.e., >.90). Results support prior studies and suggest that error variance is associated with M-CBM probe content and development procedures.
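The relative and absolute reliability thresholds cited above correspond to two standard coefficients from generalizability theory. As a point of reference (these formulas are standard G-theory definitions for a persons × probes design, not taken from the article itself), the generalizability coefficient for relative decisions and the dependability index for absolute decisions are:

```latex
% Relative decisions (rank-ordering students): generalizability coefficient
% \sigma^2_p = person (universe-score) variance
% \sigma^2_{pi} = person-by-probe interaction variance (incl. residual error)
% n_i = number of probes administered
E\rho^2 = \frac{\sigma^2_p}{\sigma^2_p + \sigma^2_{pi}/n_i}

% Absolute decisions (interpreting scores against a fixed criterion):
% dependability index, which also counts probe main-effect variance \sigma^2_i
\Phi = \frac{\sigma^2_p}{\sigma^2_p + \sigma^2_i/n_i + \sigma^2_{pi}/n_i}
```

Because both coefficients divide error variance by the number of probes, probe sets with smaller probe-related variance components (as reported for the experimental set) reach a given threshold with fewer administrations.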
