Abstract
Brief experimental analysis (BEA) is a well-researched approach to problem analysis in which potential interventions are pilot tested using a single-subject alternating treatment design. However, its brevity may lead to a high frequency of decision-making errors, particularly when one of the tested conditions is rarely the optimal one for students (i.e., when its base rate is low). The current study explored the accuracy of a specific variant of BEA, skill versus performance deficit analysis (SPA), across different variations of the basic BEA design, score difference thresholds, and reading and math curriculum-based measurements (CBMs). Findings indicate that the ABAB design provides reasonable control of such error rates when using reading CBM, whereas subtraction CBM required the use of an ABABAB design. Such error rates could not be controlled, regardless of design, when using multiplication CBM. Implications for best practice in the use of BEA are discussed.
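The interaction the abstract describes, between a condition's base rate, the score difference threshold, and the number of A/B comparisons in the design, can be illustrated with a small Monte Carlo sketch. All parameter values below (base rate, threshold, effect size, noise) are illustrative assumptions chosen for demonstration, not figures from the study, and the all-pairs-must-agree decision rule is a simplified stand-in for visual analysis of an alternating treatment design.

```python
import random

def simulate_error_rate(base_rate=0.10, threshold=5.0, pairs=2,
                        true_effect=10.0, noise_sd=5.0,
                        trials=10000, seed=1):
    """Monte Carlo sketch of decision accuracy in an SPA-style
    comparison. `pairs=2` loosely mimics an ABAB design and
    `pairs=3` an ABABAB design; all values are assumptions."""
    rng = random.Random(seed)
    errors = 0
    for _ in range(trials):
        # True state: a performance deficit occurs at the base rate.
        perf_deficit = rng.random() < base_rate
        effect = true_effect if perf_deficit else 0.0
        # One decision per A/B pair: does the incentive (B) phase
        # beat the baseline (A) phase by at least the threshold?
        wins = sum(
            (rng.gauss(effect, noise_sd) - rng.gauss(0.0, noise_sd))
            >= threshold
            for _ in range(pairs)
        )
        # Simplified rule: call it a performance deficit only if
        # every A/B pair shows a consistent difference.
        decided_perf = wins == pairs
        if decided_perf != perf_deficit:
            errors += 1
    return errors / trials

# Adding A/B pairs (ABABAB vs. ABAB) tends to lower the error rate.
for pairs in (2, 3):
    print(pairs, round(simulate_error_rate(pairs=pairs), 3))
```

Lowering `base_rate` or `threshold` in this sketch raises the share of false "performance deficit" calls, which parallels the abstract's point that a rarely optimal condition inflates decision errors unless the design is lengthened.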
