Classification accuracy of mixed format tests: A bi-factor approach

Mixed format tests (e.g., a test consisting of multiple-choice [MC] items and constructed response [CR] items) have become increasingly popular. However, the latent structure of item pools consisting of the two formats is still equivocal. Moreover, the implications of this latent structure are unclear: for example, do constructed response items tap reasoning skills that cannot be assessed with multiple-choice items? This study explored the dimensionality of mixed format tests by applying bi-factor models to ten tests of various subjects from the College Board's Advanced Placement (AP) Program and compared the accuracy of scores based on the bi-factor analysis with scores derived from a unidimensional analysis. More importantly, this study focused on a practical and important question: classification accuracy of the overall grade on a mixed format test. Our findings revealed that the degree of multidimensionality resulting from the mixed item format varied from subject to subject, depending on the disattenuated correlation between scores from the MC and CR subtests. Moreover, remarkably small decrements in classification accuracy were found for the unidimensional analysis when the disattenuated correlations exceeded .90.
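The study's key quantity, the disattenuated correlation between MC and CR subtest scores, is the observed correlation corrected for measurement error in each subtest via Spearman's classical formula, r_true = r_xy / sqrt(rel_x * rel_y). A minimal sketch, using hypothetical reliability and correlation values that are not taken from the study:

```python
import math

def disattenuated_correlation(r_xy: float, rel_x: float, rel_y: float) -> float:
    """Correct an observed correlation between two subtest scores for
    unreliability using Spearman's disattenuation formula:
    r_true = r_xy / sqrt(rel_x * rel_y)."""
    return r_xy / math.sqrt(rel_x * rel_y)

# Hypothetical illustration: observed MC-CR correlation of .78,
# with subtest reliabilities of .90 (MC) and .75 (CR).
r_true = disattenuated_correlation(0.78, 0.90, 0.75)
print(round(r_true, 3))  # prints 0.949
```

In this illustrative case the corrected correlation exceeds .90, the region where, per the abstract, a unidimensional analysis showed only small decrements in classification accuracy.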

Bibliographic Details
Main Authors: Wei Wang, Fritz Drasgow, Liwen Liu
Format: Article
Language: English
Published: Frontiers Media S.A., 2016-02-01
Series: Frontiers in Psychology
Subjects: item response theory; classification accuracy; bi-factor model; constructed response items; mixed format test
Online Access: http://journal.frontiersin.org/Journal/10.3389/fpsyg.2016.00270/full
Record Details

Record ID: doaj-98d85dd15a7b4740a4cf237e2cb8b6fd
ISSN: 1664-1078
DOI: 10.3389/fpsyg.2016.00270
Author affiliations: Wei Wang, University of Central Florida; Fritz Drasgow, University of Illinois at Urbana-Champaign; Liwen Liu, University of Illinois at Urbana-Champaign
Abstract: Mixed format tests (e.g., a test consisting of multiple-choice [MC] items and constructed response [CR] items) have become increasingly popular. However, the latent structure of item pools consisting of the two formats is still equivocal. Moreover, the implications of this latent structure are unclear: for example, do constructed response items tap reasoning skills that cannot be assessed with multiple-choice items? This study explored the dimensionality of mixed format tests by applying bi-factor models to ten tests of various subjects from the College Board's Advanced Placement (AP) Program and compared the accuracy of scores based on the bi-factor analysis with scores derived from a unidimensional analysis. More importantly, this study focused on a practical and important question: classification accuracy of the overall grade on a mixed format test. Our findings revealed that the degree of multidimensionality resulting from the mixed item format varied from subject to subject, depending on the disattenuated correlation between scores from the MC and CR subtests. Moreover, remarkably small decrements in classification accuracy were found for the unidimensional analysis when the disattenuated correlations exceeded .90.