More is not always better: An experimental individual-level validation of the randomized response technique and the crosswise model.

Social desirability and the fear of sanctions can deter survey respondents from responding truthfully to sensitive questions. Self-reports on norm-breaking behavior such as shoplifting, non-voting, or tax evasion may thus be subject to considerable misreporting. To mitigate such response bias, various indirect question techniques, such as the randomized response technique (RRT), have been proposed. We evaluate the viability of several popular variants of the RRT, including the recently proposed crosswise-model RRT, by comparing respondents' self-reports on cheating in dice games to actual cheating behavior, thereby distinguishing between false negatives (underreporting) and false positives (overreporting). The study has been implemented as an online survey on Amazon Mechanical Turk (N = 6,505). Our results from two validation designs indicate that the forced-response RRT and the unrelated-question RRT, as implemented in our survey, fail to reduce the level of misreporting compared to conventional direct questioning. For the crosswise-model RRT we do observe a reduction of false negatives. At the same time, however, there is a non-ignorable increase in false positives; a flaw that previous evaluation studies relying on comparative or aggregate-level validation could not detect. Overall, none of the evaluated indirect techniques outperformed conventional direct questioning. Furthermore, our study demonstrates the importance of identifying false negatives as well as false positives to avoid false conclusions about the validity of indirect sensitive question techniques.
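As a rough illustration of how such indirect questioning techniques recover a prevalence estimate from the deliberately "noisy" answers, the sketch below implements the textbook forced-response RRT and crosswise-model estimators in Python. It is not taken from the article: the design probabilities (1/6, 1/6, and 0.25) and the example response shares are illustrative assumptions, not the parameters or data of the study.

# Minimal sketch of prevalence estimation for two indirect question designs.
# Formulas are the standard forced-response RRT and crosswise-model estimators;
# the probabilities and example shares below are illustrative, not the study's.

def forced_response_estimate(share_yes, p_forced_yes=1/6, p_forced_no=1/6):
    """Estimate true prevalence pi from the observed share of 'yes' answers.

    A randomizing device forces 'yes' with prob. p_forced_yes, 'no' with
    prob. p_forced_no, and instructs a truthful answer otherwise, so
    share_yes = p_forced_yes + (1 - p_forced_yes - p_forced_no) * pi.
    """
    p_truth = 1 - p_forced_yes - p_forced_no
    return (share_yes - p_forced_yes) / p_truth

def crosswise_estimate(share_same, p=0.25):
    """Estimate pi from the share of 'both the same' answers in the crosswise model.

    Respondents report whether their answers to the sensitive item and to an
    unrelated item with known prevalence p agree, so
    share_same = pi * p + (1 - pi) * (1 - p), which requires p != 0.5.
    """
    return (share_same + p - 1) / (2 * p - 1)

if __name__ == "__main__":
    # Illustrative observed shares, not data from the study.
    print(round(forced_response_estimate(0.30), 3))  # -> 0.2
    print(round(crosswise_estimate(0.65), 3))        # -> 0.2

Both estimators recover the true prevalence only if respondents follow the instructions of the randomizing procedure; the article's individual-level validation is aimed exactly at detecting deviations from that assumption, separately as false negatives and false positives.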


Bibliographic Details
Main Authors: Marc Höglinger, Ben Jann
Format: Article
Language: English
Published: Public Library of Science (PLoS), 2018-01-01
Series: PLoS ONE
ISSN: 1932-6203
DOI: 10.1371/journal.pone.0201770
Online Access: http://europepmc.org/articles/PMC6091935?pdf=render