Statistical stopping criteria for automated screening in systematic reviews

Bibliographic Details
Main Authors: Max W Callaghan, Finn Müller-Hansen
Format: Article
Language: English
Published: BMC 2020-11-01
Series: Systematic Reviews
Online Access: https://doi.org/10.1186/s13643-020-01521-4
Description
Summary: Active learning for systematic review screening promises to reduce the human effort required to identify relevant documents for a systematic review. Machines and humans work together, with humans providing training data and the machine prioritising the documents that the humans screen. This enables all relevant documents to be identified after viewing only a fraction of the total. However, current approaches lack robust stopping criteria, so reviewers do not know when they have seen all, or a given proportion, of the relevant documents; this makes such systems hard to implement in live reviews. This paper introduces a workflow with flexible statistical stopping criteria, which offer real work reductions by rejecting the hypothesis of having missed a given recall target at a given level of confidence. On test datasets, the stopping criteria are shown to achieve a reliable level of recall while still providing work reductions of 17% on average. Methods proposed previously are shown to provide inconsistent recall and work reductions across datasets.
ISSN: 2046-4053
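
The stopping criterion summarised above rests on a hypothesis test: screening can stop once the hypothesis that a given recall target has been missed can be rejected at a chosen confidence level. The sketch below illustrates one way such a test could look in Python, assuming a simplified setting in which documents are screened in random order after the active-learning phase and the test uses a hypergeometric distribution; the function name, parameters, and example figures are illustrative assumptions, not the authors' exact procedure or data.

```python
# A minimal sketch of a hypothesis-test-based stopping rule, assuming a
# simplified setting: after the active-learning phase, the remaining documents
# are screened in random order, and we test whether the recall target could
# still be missed. Illustration only, not the paper's exact procedure.
from math import floor
from scipy.stats import hypergeom

def p_missed_target(n_total, n_sampled, k_found, r_before, recall_target=0.95):
    """P-value for the null hypothesis that recall is still below the target.

    n_total:       documents remaining when random screening started
    n_sampled:     documents screened at random so far
    k_found:       relevant documents found in that random sample
    r_before:      relevant documents found before random screening began
    recall_target: desired recall (e.g. 0.95)
    """
    # Under the null hypothesis, the true number of relevant documents is
    # large enough that the recall target is missed; the smallest such total
    # implies at least k_hidden relevant documents in the remaining pool.
    r_seen = r_before + k_found
    r_total_null = floor(r_seen / recall_target) + 1  # smallest total violating the target
    k_hidden = r_total_null - r_before                # relevant docs in the pool under the null
    # Probability of finding k_found or fewer relevant documents in a random
    # sample of n_sampled from n_total documents containing k_hidden relevant
    # ones (hypergeometric CDF). A small value lets us reject the null.
    return hypergeom.cdf(k_found, n_total, k_hidden, n_sampled)

# Hypothetical example: stop once the null of missing 95% recall can be
# rejected at alpha = 0.05.
p = p_missed_target(n_total=2000, n_sampled=1500, k_found=1, r_before=60)
if p < 0.05:
    print(f"Stop screening (p = {p:.4f})")
else:
    print(f"Keep screening (p = {p:.4f})")
```

In this sketch the p-value shrinks as more randomly ordered documents are screened without finding new relevant ones, which is what allows screening to stop early while bounding the risk of falling short of the recall target.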