Measurements of cognitive skill by survey mode: Marginal differences and scaling similarities

This paper addresses how measurements of cognitive skill differ by survey mode, from a face-to-face interview to a self-completed survey, using the Wordsum vocabulary test found in the General Social Survey. The Wordsum acts as a proxy for general cognitive skill, and it has been used to predict a variety of political variables. Understanding how cognitive skill measures differ by mode is therefore important for political science research, given the proliferation of self-completed Internet surveys. I leverage a large-scale mode experiment that randomizes a general population sample into a face-to-face or self-completed interview. Results show that historically easy questions are more likely to yield correct answers in the face-to-face treatment, but modest-to-difficult test questions have a higher rate of correct answers in the self-completed treatment (marginal distributions). A cognitive skill scale estimated with item response theory, however, does not differ by mode, because the ordering of ideal points does not change from a face-to-face interview to a self-completed survey. When applying the scale to a well-established model of party identification, I find no difference by mode, suggesting that a transition from face-to-face interviews to self-completed surveys may not alter conclusions drawn from models that use the Wordsum test.
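To make the abstract's central contrast concrete, the sketch below is a minimal, hypothetical illustration (simulated data, a simplified Rasch-style item model, and invented effect sizes; it is not the paper's code or data). It shows how per-item "marginal" proportions correct can differ between modes while the ordering of respondents on the resulting scale stays essentially the same in both modes.

import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

n_resp, n_items = 1000, 10                 # respondents, Wordsum-style items
ability = rng.normal(size=n_resp)          # latent cognitive skill
difficulty = np.linspace(-2.0, 2.0, n_items)  # easy -> hard items
mode = rng.integers(0, 2, size=n_resp)     # 0 = face-to-face, 1 = self-completed

# Hypothetical item-level mode effect: easy items slightly easier face-to-face,
# harder items slightly easier when self-completed (the pattern in the abstract).
shift = np.where(difficulty < 0, -0.3, 0.3)

logit = ability[:, None] - difficulty[None, :] + mode[:, None] * shift[None, :]
prob = 1.0 / (1.0 + np.exp(-logit))
correct = rng.random((n_resp, n_items)) < prob   # simulated right/wrong answers

# Marginal distributions: per-item proportions correct differ by mode ...
for j in range(n_items):
    p_f2f = correct[mode == 0, j].mean()
    p_self = correct[mode == 1, j].mean()
    print(f"item {j:2d}  face-to-face {p_f2f:.2f}  self-completed {p_self:.2f}")

# ... while the ordering of respondents on the summed score tracks the latent
# trait about equally well in both modes (the "scaling similarity").
for m, label in [(0, "face-to-face"), (1, "self-completed")]:
    rho, _ = spearmanr(correct[mode == m].sum(axis=1), ability[mode == m])
    print(f"{label}: rank correlation with latent skill = {rho:.2f}")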


Bibliographic Details
Main Author: Andrew Gooch
Format: Article
Language: English
Published: SAGE Publishing, 2015-07-01
Series: Research & Politics
ISSN: 2053-1680
Online Access: https://doi.org/10.1177/2053168015590681