Quantitatively ranking incorrect responses to multiple-choice questions using item response theory

Research-based assessment instruments (RBAIs) are ubiquitous throughout both physics instruction and physics education research. The vast majority of analyses involving student responses to RBAI questions have focused on whether or not a student selects correct answers and on using correctness to measure growth. This approach often undervalues the rich information that may be obtained by examining students’ particular choices of incorrect answers. In the present study, we aim to reveal some of this valuable information by quantitatively determining the relative correctness of various incorrect responses. To accomplish this, we propose an assumption that allows us to define relative correctness: students who have a high understanding of Newtonian physics are more likely both to answer questions correctly and to choose better incorrect responses than students who have a low understanding. Analyses using item response theory align with this assumption, and Bock’s nominal response model allows us to uniquely rank each incorrect response. We present results from over 7000 students’ responses to the Force and Motion Conceptual Evaluation.
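
As an illustration of the modeling approach named in the abstract, the sketch below evaluates Bock’s nominal response model, in which each response option k of an item has its own slope a_k and intercept c_k, and the probability that an examinee with ability theta selects option k is exp(a_k*theta + c_k) normalized over all options. This is a minimal NumPy sketch, not the authors’ analysis code: the item parameters are invented, the usual identification constraints (e.g., slopes and intercepts summing to zero) are omitted, and ranking incorrect options by their slopes is used here only as an illustrative convention.

    import numpy as np

    def nominal_response_probabilities(theta, slopes, intercepts):
        # Bock's nominal response model:
        # P(option k | theta) = exp(a_k*theta + c_k) / sum_m exp(a_m*theta + c_m)
        logits = np.outer(theta, slopes) + intercepts   # shape (n_examinees, n_options)
        logits -= logits.max(axis=1, keepdims=True)     # subtract row max for numerical stability
        expl = np.exp(logits)
        return expl / expl.sum(axis=1, keepdims=True)

    # Hypothetical parameters for a single four-option item (option 0 is the keyed answer).
    slopes = np.array([1.2, 0.4, -0.5, -1.1])        # a_k for each response option
    intercepts = np.array([0.3, 0.5, -0.2, -0.6])    # c_k for each response option
    abilities = np.array([-2.0, 0.0, 2.0])           # low-, average-, and high-ability examinees

    print(np.round(nominal_response_probabilities(abilities, slopes, intercepts), 3))

    # Illustrative ranking: order the incorrect options (1-3) by slope; options whose
    # selection probability grows fastest with ability are treated as "better" errors.
    ranked_incorrect = 1 + np.argsort(-slopes[1:])
    print("incorrect options, best to worst:", ranked_incorrect)

In this toy example the keyed answer becomes more probable as ability increases while the option with the most negative slope fades, which is the qualitative pattern the ranking idea relies on.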

Bibliographic Details
Main Authors: Trevor I. Smith, Kyle J. Louis, Bartholomew J. Ricci, IV, Nasrine Bendjilali
Format: Article
Language: English
Published: American Physical Society, 2020-01-01
Series: Physical Review Physics Education Research
ISSN: 2469-9896
Online Access: http://doi.org/10.1103/PhysRevPhysEducRes.16.010107