Random Number Simulations Reveal How Random Noise Affects the Measurements and Graphical Portrayals of Self-Assessed Competency

Self-assessment measures of competency are blends of an authentic self-assessment signal that researchers seek to measure and random disorder or "noise" that accompanies that signal. In this study, we use random number simulations to explore how random noise affects critical aspects of self-assessment investigations: reliability, correlation, critical sample size, and the graphical representations of self-assessment data. We show that graphical conventions common in the self-assessment literature introduce artifacts that invite misinterpretation. Troublesome conventions include: (y minus x) vs. (x) scatterplots; (y minus x) vs. (x) column graphs aggregated as quantiles; line charts that display data aggregated as quantiles; and some histograms. Graphical conventions that generate minimal artifacts include scatterplots with a best-fit line that depict (y) vs. (x) measures (self-assessed competence vs. measured competence) plotted by individual participant scores, and (y) vs. (x) scatterplots of collective average measures of all participants plotted item-by-item. This last graphic convention attenuates noise and improves the definition of the signal. To provide relevant comparisons across varied graphical conventions, we use a single dataset derived from paired measures of 1154 participants' self-assessed competence and demonstrated competence in science literacy. Our results show that different numerical approaches employed in investigating and describing self-assessment accuracy are not equally valid. By modeling this dataset with random numbers, we show how recognizing the varied expressions of randomness in self-assessment data can improve the validity of numeracy-based descriptions of self-assessment.
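A minimal sketch (not the authors' code) of the kind of random number simulation the abstract describes: when paired scores are pure random noise with no authentic self-assessment signal, a (y minus x) vs. (x) comparison still produces a strong negative trend. Only the sample size (1154) comes from the abstract; the 0-100 scoring scale and the use of NumPy are assumptions for illustration.

import numpy as np

rng = np.random.default_rng(0)
n = 1154  # sample size matching the dataset described in the abstract

# Pure noise: neither array carries an authentic self-assessment signal.
measured = rng.uniform(0, 100, n)        # x: demonstrated competence (assumed 0-100 scale)
self_assessed = rng.uniform(0, 100, n)   # y: self-assessed competence (same assumed scale)

# A (y) vs. (x) plot of these data correctly shows no relationship ...
r_yx = np.corrcoef(measured, self_assessed)[0, 1]

# ... while a (y minus x) vs. (x) plot manufactures a strong negative trend
# from the very same noise.
r_diff = np.corrcoef(measured, self_assessed - measured)[0, 1]

print(f"r for y vs. x:       {r_yx:+.2f}  (near zero, as it should be)")
print(f"r for (y - x) vs. x: {r_diff:+.2f}  (strongly negative artifact)")

For independent, equal-variance noise the expected correlation of (y minus x) with (x) is -1/sqrt(2), roughly -0.71, which is why difference-score plots aggregated as quantiles can mimic a Dunning-Kruger-style pattern even when no authentic signal is present.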

Bibliographic Details
Main Authors: Edward Nuhfer (California State University, retired), Christopher Cogan (Independent Consultant), Steven Fleisher (California State University - Channel Islands), Eric Gaze (Bowdoin College), Karl Wirth (Macalester College)
Format: Article
Language: English
Published: National Numeracy Network, 2016-01-01
Series: Numeracy, vol. 9, iss. 1, art. 4
ISSN: 1936-4660
DOI: http://dx.doi.org/10.5038/1936-4660.9.1.4
Subjects: self-assessment; Dunning-Kruger Effect; knowledge surveys; reliability; graphs; numeracy; random number simulation; noise; signal
Source: Directory of Open Access Journals (DOAJ)
Online Access: http://scholarcommons.usf.edu/numeracy/vol9/iss1/art4/