Using crowdsourcing to evaluate published scientific literature: methods and example.
Systematically evaluating scientific literature is a time-consuming endeavor that requires hours of coding and rating. Here, we describe a method to distribute these tasks across a large group through online crowdsourcing. Using Amazon's Mechanical Turk, crowdsourced workers (microworkers) completed four groups of tasks to evaluate the question, "Do nutrition-obesity studies with conclusions concordant with popular opinion receive more attention in the scientific community than do those that are discordant?" 1) Microworkers who passed a qualification test (19% passed) evaluated abstracts to determine if they were about human studies investigating nutrition and obesity. Agreement between the first two raters' conclusions was moderate (κ = 0.586), with consensus being reached in 96% of abstracts. 2) Microworkers iteratively synthesized free-text answers describing the studied foods into one coherent term. Approximately 84% of foods were agreed upon, with only 4% and 8% of ratings failing manual review in different steps. 3) Microworkers were asked to rate the perceived obesogenicity of the synthesized food terms. Over 99% of responses were complete and usable, and the microworkers' opinions qualitatively matched the authors' expert expectations (e.g., sugar-sweetened beverages were thought to cause obesity and fruits and vegetables were thought to prevent obesity). 4) Microworkers extracted citation counts for each paper through Google Scholar. Microworkers reached consensus or unanimous agreement for all successful searches. To answer the example question, data were aggregated and analyzed, and showed no significant association between popular opinion and the attention a paper received, as measured by SCImago Journal Rank and citation counts. Direct microworker costs totaled $221.75 (estimated cost at minimum wage: $312.61). We discuss important points to consider to ensure good quality control and appropriate pay for microworkers. With good reliability and low cost, crowdsourcing has the potential to evaluate published literature in a cost-effective, quick, and reliable manner using existing, easily accessible resources.
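The moderate agreement figure quoted above (κ = 0.586) is Cohen's kappa between the first two raters of each abstract. As a minimal sketch of how that statistic is computed for two raters (not the authors' analysis code; the yes/no labels below are hypothetical), using plain Python with no external libraries:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters labelling the same items."""
    n = len(rater_a)
    # Observed agreement: fraction of items on which the raters give the same label.
    p_observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement: product of the raters' marginal label frequencies, summed over labels.
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    p_chance = sum((counts_a[label] / n) * (counts_b[label] / n)
                   for label in set(rater_a) | set(rater_b))
    return (p_observed - p_chance) / (1 - p_chance)

# Hypothetical screening decisions ("is this a human nutrition-obesity study?")
# from two microworkers; not data from the paper.
rater_1 = ["yes", "yes", "no", "yes", "no", "no", "yes", "no"]
rater_2 = ["yes", "no", "no", "yes", "no", "yes", "yes", "no"]
print(round(cohens_kappa(rater_1, rater_2), 3))  # prints 0.5 for this toy data
```

The same calculation is also available as `sklearn.metrics.cohen_kappa_score`, and items on which the first two raters disagree can be passed to additional raters to reach the consensus rate reported in the abstract.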
Main Authors: | Andrew W Brown, David B Allison |
---|---|
Format: | Article |
Language: | English |
Published: | Public Library of Science (PLoS), 2014-01-01 |
Series: | PLoS ONE |
Online Access: | http://europepmc.org/articles/PMC4079692?pdf=render |
id | doaj-9ea90bb778ba4eadaf295e1790d01457
---|---
record_format | Article
doi | 10.1371/journal.pone.0100647
citation | PLoS ONE 9(7): e100647 (2014)
collection | DOAJ
language | English
format | Article
sources | DOAJ
author | Andrew W Brown; David B Allison
title | Using crowdsourcing to evaluate published scientific literature: methods and example.
publisher | Public Library of Science (PLoS)
series | PLoS ONE
issn | 1932-6203
publishDate | 2014-01-01
url | http://europepmc.org/articles/PMC4079692?pdf=render