Abstract
Teachers’ knowledge and skills related to data-based instruction (DBI) can influence their self-efficacy and their implementation of DBI with fidelity, ultimately playing a crucial role in improving student outcomes. The purpose of this brief report is to provide evidence for the technical adequacy of a measure of DBI knowledge and skills in writing by examining its internal consistency reliability, comparing different factor structures, and assessing item statistics using classical test theory and item response theory. We used responses from 154 elementary school teachers, primarily special educators, working with children with intensive early writing needs. Results from confirmatory factor analysis did not strongly favor either a one-factor solution, representing a single dimension of DBI knowledge and skills, or a two-factor solution, comprising separate knowledge and skills subscales. Internal consistency reliability coefficients fell within an acceptable range, particularly under the one-factor solution. Item difficulty and discrimination estimates varied across items, suggesting the need to further investigate certain items. We discuss the potential of the DBI Knowledge and Skills Assessment, specifically in the context of measuring teacher-level DBI outcomes in writing.