What makes experts reliable? Expert reliability and the estimation of latent traits

Experts code latent quantities for many influential political science datasets. Although scholars are aware of the importance of accounting for variation in expert reliability when aggregating such data, they have not systematically explored either the factors affecting expert reliability or the degree to which these factors influence estimates of latent concepts. Here we provide a template for examining potential correlates of expert reliability, using coder-level data for six randomly selected variables from a cross-national panel dataset. We aggregate these data with an ordinal item response theory model that parameterizes expert reliability, and regress the resulting reliability estimates on both expert demographic characteristics and measures of their coding behavior. We find little evidence of a consistent, substantial relationship between most expert characteristics and reliability, and these null results extend to potentially problematic sources of bias in estimates, such as gender. The exceptions to these results are intuitive, and provide baseline guidance for expert recruitment and retention in future expert coding projects: attentive and confident experts who have contextual knowledge tend to be more reliable. Taken as a whole, these findings reinforce arguments that item response theory models are a relatively safe method for aggregating expert-coded data.
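
The aggregation step the abstract describes can be made concrete with a small sketch. Below is a minimal, illustrative ordinal item response theory likelihood in Python, in which each expert's discrimination parameter plays the role of reliability: the larger it is, the more tightly that expert's ordinal ratings track the latent trait. This is a simplified logistic variant with shared cutpoints and simulated data, offered only as an assumption-laden sketch; the paper's Bayesian model is richer than this, and every name and value below is hypothetical.

```python
# Sketch of an ordinal IRT likelihood with per-expert reliability
# (discrimination) parameters. Illustrative only: a simplified logistic
# variant with shared cutpoints, not the authors' actual model.
import numpy as np

def ordered_logit_probs(z, beta, tau):
    """P(rating = k) for K ordinal categories, given latent trait z,
    expert discrimination (reliability) beta, and K-1 increasing
    cutpoints tau. Higher beta concentrates mass near the modal category."""
    cdf = 1.0 / (1.0 + np.exp(-(tau - beta * z)))   # P(rating <= k)
    return np.diff(np.concatenate(([0.0], cdf, [1.0])))

# Simulate 50 country-years rated by 5 experts on a 4-point scale.
rng = np.random.default_rng(0)
z_true = rng.normal(size=50)                      # latent traits
beta_true = np.array([0.3, 1.0, 1.5, 2.0, 2.5])   # expert reliabilities
tau = np.array([-1.5, 0.0, 1.5])                  # shared cutpoints

ratings = np.empty((50, 5), dtype=int)
for i in range(50):
    for r in range(5):
        p = ordered_logit_probs(z_true[i], beta_true[r], tau)
        ratings[i, r] = rng.choice(len(p), p=p)

def log_likelihood(z, beta, tau, ratings):
    """Joint log-likelihood of all ratings: the quantity a full Bayesian
    fit would combine with priors on z and beta to recover both the
    latent traits and each expert's reliability."""
    ll = 0.0
    for i in range(ratings.shape[0]):
        for r in range(ratings.shape[1]):
            p = ordered_logit_probs(z[i], beta[r], tau)
            ll += np.log(p[ratings[i, r]])
    return ll

print(log_likelihood(z_true, beta_true, tau, ratings))
```

In the paper's second stage, the reliability estimates recovered from a model of this general form are regressed on expert demographic characteristics and coding-behavior measures; once reliability point estimates are in hand, that step is an ordinary regression.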

Bibliographic Details
Main Authors: Kyle L. Marquardt, Daniel Pemstein, Brigitte Seim, Yi-ting Wang
Author Affiliations:
Kyle L. Marquardt: School of Politics and Governance and International Center for the Study of Institutions and Development, National Research University Higher School of Economics, Russian Federation
Daniel Pemstein: Department of Criminal Justice and Political Science, North Dakota State University, USA
Brigitte Seim: Department of Public Policy, University of North Carolina, Chapel Hill, USA
Yi-ting Wang: Department of Political Science, National Cheng Kung University, Taiwan
Format: Article
Language: English
Published: SAGE Publishing, 2019-10-01
Series: Research & Politics
ISSN: 2053-1680
Online Access: https://doi.org/10.1177/2053168019879561