Model Selection for Cogitative Diagnostic Analysis of the Reading Comprehension Test
Main Authors: Hui Liu (Faculty of Linguistic Sciences, Beijing Language and Culture University, Beijing, China; Collaborative Innovation Center of Assessment for Basic Education Quality, Beijing Normal University, Beijing, China); Yufang Bian (Collaborative Innovation Center of Assessment for Basic Education Quality, Beijing Normal University, Beijing, China)
Format: Article
Language: English
Published: Frontiers Media S.A., 2021-08-01
Series: Frontiers in Psychology, vol. 12, article 644764 (ISSN 1664-1078)
DOI: 10.3389/fpsyg.2021.644764
Subjects: continuous variable; diagnostic study; multidimensional item response theory; model selection; reading comprehension test
Source: DOAJ (record id: doaj-ce6f8b8a7d144d6e99ef2661a20073d4)
Online Access: https://www.frontiersin.org/articles/10.3389/fpsyg.2021.644764/full
Description:
Reading subskills are generally regarded as continuous variables, whereas most models used in previous reading diagnoses assume that the latent variables are dichotomous. Because the multidimensional item response theory (MIRT) model has continuous latent variables and can be used for diagnostic purposes, this study compared the performance of MIRT with two representative models traditionally used in reading diagnoses: the reduced reparametrized unified model (R-RUM) and the generalized deterministic inputs, noisy "and" gate (G-DINA) model. The comparison was carried out with both empirical and simulated data. First, model-data fit indices were used to evaluate whether MIRT was more appropriate than R-RUM and G-DINA for the real data. Then, with the simulated data, three analyses were conducted: the relations between the estimated scores from MIRT, R-RUM, and G-DINA and the true scores were compared to examine whether the true abilities were well represented; correct classification rates under different research conditions were calculated for the three models to examine person-parameter recovery; and the frequency distributions of subskill mastery probabilities were compared to show how far the estimated probabilities deviated from the true values in the overall distribution. MIRT achieved better model-data fit, yielded estimated scores that represented the true abilities more reasonably, had an advantage in correct classification rates, and showed less deviation from the true values in the frequency distributions of subskill mastery probabilities, which means it can produce more accurate diagnostic information about the reading abilities of test-takers. Because more accurate diagnostic information has greater guiding value for remedial teaching and learning, and score interpretation in reading diagnoses is more reasonable with the MIRT model, this study recommends MIRT as a new methodology for future reading diagnostic analyses.
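For context on the contrast the abstract draws between continuous and dichotomous latent variables, the sketch below gives textbook forms of the two model families being compared: the compensatory multidimensional 2PL used in MIRT, where the latent trait vector is continuous, and the saturated identity-link G-DINA model, where mastery of each attribute is a 0/1 indicator. These are standard formulations with conventional symbols, not equations reproduced from the article.

```latex
% Compensatory MIRT (multidimensional 2PL): \boldsymbol{\theta}_i is a continuous K-vector.
P(X_{ij}=1 \mid \boldsymbol{\theta}_i) =
  \frac{1}{1 + \exp\!\bigl[-(\mathbf{a}_j^{\top}\boldsymbol{\theta}_i + d_j)\bigr]}

% G-DINA (identity link): \boldsymbol{\alpha}_{lj}^{*} collects the K_j^{*}
% dichotomous attributes required by item j, so each subskill is mastered or not.
P(X_{j}=1 \mid \boldsymbol{\alpha}_{lj}^{*}) =
  \delta_{j0}
  + \sum_{k=1}^{K_j^{*}} \delta_{jk}\,\alpha_{lk}
  + \sum_{k'=k+1}^{K_j^{*}} \sum_{k=1}^{K_j^{*}-1} \delta_{jkk'}\,\alpha_{lk}\alpha_{lk'}
  + \cdots
  + \delta_{j12\ldots K_j^{*}} \prod_{k=1}^{K_j^{*}} \alpha_{lk}
```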
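The simulation study summarized above evaluates person-parameter recovery via correct classification rates. As a minimal, hypothetical sketch of how such rates are typically computed once the true attribute profiles and estimated mastery probabilities are available, the Python snippet below applies a 0.5 cutoff to randomly generated stand-in values; the variable names, the cutoff, and the simulated numbers are illustrative assumptions, not taken from the study.

```python
"""Illustrative sketch (not from the article): attribute-level and
pattern-level correct classification rates (CCR) given known true
attribute profiles and estimated mastery probabilities."""
import numpy as np

rng = np.random.default_rng(0)
n_persons, n_skills = 1000, 4

# True dichotomous attribute profiles used to generate the simulated responses.
true_alpha = rng.integers(0, 2, size=(n_persons, n_skills))

# Estimated subskill mastery probabilities from a fitted diagnostic model
# (here just noisy stand-ins for demonstration purposes).
noise = rng.normal(0.0, 0.15, size=(n_persons, n_skills))
est_prob = np.clip(true_alpha * 0.8 + 0.1 + noise, 0.0, 1.0)

# Classify a subskill as "mastered" when the estimated probability exceeds 0.5
# (the cutoff is an assumption for this example).
est_alpha = (est_prob > 0.5).astype(int)

# Attribute-level CCR: proportion of correct mastery decisions per subskill.
attr_ccr = (est_alpha == true_alpha).mean(axis=0)

# Pattern-level CCR: proportion of persons whose whole profile is recovered.
pattern_ccr = (est_alpha == true_alpha).all(axis=1).mean()

print("attribute-level CCR:", np.round(attr_ccr, 3))
print("pattern-level CCR:", round(pattern_ccr, 3))
```

Attribute-level rates describe recovery of each subskill separately, while the pattern-level rate is stricter because every subskill in a person's profile must be classified correctly.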