How Random Noise and a Graphical Convention Subverted Behavioral Scientists' Explanations of Self-Assessment Data: Numeracy Underlies Better Alternatives

Despite nearly two decades of research, researchers have not resolved whether people generally perceive their skills accurately or inaccurately. In this paper, we trace this lack of resolution to numeracy, specifically to the frequently overlooked complications that arise from the noisy data produced by the paired measures that researchers employ to determine self-assessment accuracy. To illustrate the complications and ways to resolve them, we employ a large dataset (N = 1154) obtained from paired measures of documented reliability to study self-assessed proficiency in science literacy. We collected demographic information that allowed both criterion-referenced and normative-based analyses of self-assessment data. We used these analyses to propose a quantitatively based classification scale and show how its use informs the nature of self-assessment. Much of the current consensus about people's inability to self-assess accurately comes from interpreting normative data presented in the Kruger-Dunning type graphical format or closely related (y - x) vs. (x) graphical conventions. Our data show that people's self-assessments of competence, in general, reflect a genuine competence that they can demonstrate. That finding contradicts the current consensus about the nature of self-assessment. Our results further confirm that experts are more proficient in self-assessing their abilities than novices and that women, in general, self-assess more accurately than men. The validity of interpretations of data depends strongly upon how carefully the researchers consider the numeracy that underlies graphical presentations and conclusions. Our results indicate that carefully measured self-assessments provide valid, measurable, and valuable information about proficiency.


Bibliographic Details
Main Authors: Edward Nuhfer (California State University, retired), Steven Fleisher (California State University - Channel Islands), Christopher Cogan (Ventura College), Karl Wirth (Macalester College), Eric Gaze (Bowdoin College)
Format: Article
Language: English
Published: National Numeracy Network, 2017-01-01
Series: Numeracy
Subjects: self-assessment; self-assessment classification scale; Dunning-Kruger Effect; knowledge surveys; graphs; numeracy; random number simulation; noise
ISSN: 1936-4660
DOI: http://dx.doi.org/10.5038/1936-4660.10.1.4
Online Access: http://scholarcommons.usf.edu/numeracy/vol10/iss1/art4/