Abstract
The demand for diagnostic feedback has triggered extensive research on cognitive diagnostic models (CDMs), such as the deterministic input, noisy output “and” gate (DINA) model. This study explored two Q-matrix specifications with the DINA model in a statewide large-scale mathematics assessment. The first Q-matrix was developed based on five predefined content reporting categories, and the second was based on the post hoc coding of 15 attributes by test-development experts. Total raw scores correlated strongly with the number of skills mastered, using both Q-matrices. Correlations between the DINA-model item statistics and those from the item response theory analyses were moderate to strong, but were always lower for the 15-skill model. Results highlighted the trade-off between finer-grained modeling and less precise model estimation.
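As a minimal illustration of the modeling framework the abstract describes (a toy sketch, not data or parameters from the study), the DINA model combines a Q-matrix (items × required attributes) with an examinee's attribute-mastery vector to form ideal responses, then applies item-level slip and guess parameters. All numbers below are hypothetical.

```python
import numpy as np

# Hypothetical Q-matrix: 3 items x 2 attributes (1 = item requires the attribute).
Q = np.array([[1, 0],
              [0, 1],
              [1, 1]])

# Hypothetical examinee mastery profile: masters attribute 1 only.
alpha = np.array([1, 0])

# DINA ideal response eta_j = 1 iff the examinee has every attribute item j requires.
eta = np.all(alpha >= Q, axis=1).astype(int)

# Illustrative slip (s) and guess (g) parameters per item.
slip = np.array([0.1, 0.1, 0.1])
guess = np.array([0.2, 0.2, 0.2])

# P(correct for item j) = (1 - s_j)^eta_j * g_j^(1 - eta_j)
p_correct = (1 - slip) ** eta * guess ** (1 - eta)

print(eta.tolist())        # ideal responses
print(p_correct.tolist())  # item response probabilities
```

Here the examinee is expected to answer only the first item correctly (eta = [1, 0, 0]), so the success probabilities are 1 − slip for that item and the guess probability for the others.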