Examining DIF in the Context of CDMs When the Q-Matrix Is Misspecified

Bibliographic Details
Main Authors: Dubravka Svetina, Yanan Feng, Justin Paulsen, Montserrat Valdivia, Arturo Valdivia, Shenghai Dai
Format: Article
Language: English
Published: Frontiers Media S.A. 2018-05-01
Series: Frontiers in Psychology
Subjects:
Online Access: http://journal.frontiersin.org/article/10.3389/fpsyg.2018.00696/full
Description
Summary: The rise in popularity and use of cognitive diagnostic models (CDMs) in educational research is partly motivated by the models’ ability to provide diagnostic information about students’ strengths and weaknesses in a variety of content areas. An important step in ensuring appropriate interpretations from CDMs is to investigate differential item functioning (DIF). To this end, the current simulation study examined the performance of three methods for detecting DIF in CDMs, with particular emphasis on the impact of Q-matrix misspecification on the methods’ performance. Results illustrated that logistic regression and Mantel–Haenszel controlled Type I error better than the Wald test; however, high power rates were found only for the logistic regression and Wald methods. Beyond the tradeoff between Type I error control and acceptable power, our results suggested that Q-matrix complexity and item structures yield different results for different methods, presenting a more complex picture of the methods’ performance. Finally, implications and future directions are discussed.
ISSN: 1664-1078
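
Note: The summary above names logistic regression as one of the three DIF-detection methods examined. As a point of reference only, the following is a minimal sketch of the generic logistic-regression DIF procedure (regressing an item's responses on a matching total score, group membership, and their interaction, then applying a likelihood-ratio test). It is not the study's implementation; the simulated data, variable names, and use of statsmodels are assumptions made purely for illustration.

```python
# Minimal, illustrative sketch of logistic-regression DIF detection.
# Data and names are simulated/assumed, not taken from the article.
import numpy as np
import statsmodels.api as sm
from scipy import stats

rng = np.random.default_rng(0)

# Simulated example data: 1000 examinees, 10 dichotomous items, two groups.
n_persons, n_items = 1000, 10
responses = rng.integers(0, 2, size=(n_persons, n_items))
group = rng.integers(0, 2, size=n_persons)    # 0 = reference, 1 = focal
total_score = responses.sum(axis=1)           # matching variable

def lr_dif_test(item_resp, match, grp):
    """Likelihood-ratio DIF test for one item (uniform + nonuniform DIF)."""
    base = sm.add_constant(match)                                  # matching only
    full = sm.add_constant(np.column_stack([match, grp, match * grp]))
    ll_base = sm.Logit(item_resp, base).fit(disp=0).llf
    ll_full = sm.Logit(item_resp, full).fit(disp=0).llf
    lr_stat = 2 * (ll_full - ll_base)                              # chi-square, 2 df
    return lr_stat, stats.chi2.sf(lr_stat, df=2)

for j in range(n_items):
    stat, p = lr_dif_test(responses[:, j], total_score, group)
    print(f"Item {j + 1}: LR = {stat:.2f}, p = {p:.3f}")
```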