Considerations for conducting systematic reviews: evaluating the performance of different methods for de-duplicating references

Abstract
Background: Systematic reviews involve searching multiple bibliographic databases to identify eligible studies. As this type of evidence synthesis is increasingly pursued, the use of various electronic platforms can help researchers improve the efficiency and quality of their research. We examined the accuracy and efficiency of commonly used electronic methods for flagging and removing duplicate references during this process.

Methods: A heterogeneous sample of references was obtained by conducting a similar topical search in the MEDLINE, Embase, Cochrane Central Register of Controlled Trials, and PsycINFO databases. References were de-duplicated via manual abstraction to create a benchmark set. The default settings were then used in Ovid multifile search, EndNote desktop, Mendeley, Zotero, Covidence, and Rayyan to de-duplicate the sample of references independently. Using the benchmark set as reference, the number of false-negative and false-positive duplicate references for each method was identified, and accuracy, sensitivity, and specificity were determined.

Results: We found that the most accurate methods for identifying duplicate references were Ovid, Covidence, and Rayyan. Ovid and Covidence possessed the highest specificity for identifying duplicate references, while Rayyan demonstrated the highest sensitivity.

Conclusion: This study reveals the strengths and weaknesses of commonly used de-duplication methods and provides strategies for improving their performance to avoid unintentionally removing eligible studies and introducing bias into systematic reviews. Along with availability, ease-of-use, functionality, and capability, these findings are important to consider when researchers are selecting database platforms and supporting software programs for conducting systematic reviews.

Main Authors: | Sandra McKeown (Bracken Health Sciences Library, Queen's University); Zuhaib M. Mir (Departments of Surgery and Public Health Sciences, Queen's University & Kingston Health Sciences Centre) |
---|---|
Format: | Article |
Language: | English |
Published: | BMC, 2021-01-01 |
Series: | Systematic Reviews |
ISSN: | 2046-4053 |
Collection: | DOAJ |
Subjects: | Bibliographic databases; De-duplication; Duplicate references; Reference management software; Study design; Systematic review software |
Online Access: | https://doi.org/10.1186/s13643-021-01583-y |
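The Methods above describe six tools flagging duplicates with their default settings. As a hedged illustration of why such defaults can disagree, the sketch below flags likely duplicates by exact DOI match with a fallback to a normalized title plus publication year. The record schema and matching rules are hypothetical; this is not the logic used by Ovid, EndNote, Mendeley, Zotero, Covidence, or Rayyan.

```python
# Illustrative only: flag likely duplicate references by exact DOI match,
# falling back to a normalized title plus publication year. The field names
# ('doi', 'title', 'year') are a hypothetical minimal schema for this sketch.
import re


def normalize_title(title: str) -> str:
    """Lowercase, drop punctuation, and collapse whitespace for comparison."""
    cleaned = re.sub(r"[^a-z0-9 ]", "", title.lower())
    return re.sub(r"\s+", " ", cleaned).strip()


def flag_duplicates(references):
    """Return indexes of records judged to duplicate an earlier record."""
    seen_keys = set()
    duplicate_indexes = []
    for i, ref in enumerate(references):
        doi = (ref.get("doi") or "").strip().lower()
        # Prefer the DOI as a key; otherwise fall back to title + year.
        key = doi if doi else (normalize_title(ref.get("title", "")), ref.get("year"))
        if key in seen_keys:
            duplicate_indexes.append(i)
        else:
            seen_keys.add(key)
    return duplicate_indexes


records = [
    {"doi": "10.1186/s13643-021-01583-y",
     "title": "Considerations for conducting systematic reviews", "year": 2021},
    {"doi": "",  # same article exported without a DOI
     "title": "Considerations for Conducting Systematic Reviews.", "year": 2021},
    {"doi": "10.1186/s13643-021-01583-y",
     "title": "Considerations for conducting systematic reviews", "year": 2021},
]

# Prints [2]: the repeated DOI is caught, but record 1 is missed because its
# fallback key (title + year) never matched record 0's DOI key -- a false
# negative of the kind the article measures against the benchmark set.
print(flag_duplicates(records))
```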
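The evaluation itself is straightforward to reproduce in outline: treating "duplicate" as the positive class, each tool's flags are compared against the manually built benchmark set, and accuracy, sensitivity, and specificity follow from the counts of true and false positives and negatives. A minimal sketch, with hypothetical record IDs and function names (not the authors' code):

```python
# Minimal sketch of the evaluation described in the Methods: compare the set
# of references a tool flags as duplicates against a manually de-duplicated
# benchmark, then derive accuracy, sensitivity, and specificity. Record IDs
# and function names are hypothetical; this is not the authors' code.

def evaluate_deduplication(all_ids, benchmark_duplicates, tool_duplicates):
    """Count outcomes with 'duplicate' as the positive class and score them."""
    benchmark = set(benchmark_duplicates)
    flagged = set(tool_duplicates)

    tp = len(benchmark & flagged)                 # true duplicates the tool caught
    fp = len(flagged - benchmark)                 # unique records wrongly flagged
    fn = len(benchmark - flagged)                 # true duplicates the tool missed
    tn = len(set(all_ids) - benchmark - flagged)  # unique records correctly kept

    return {
        "sensitivity": tp / (tp + fn) if (tp + fn) else 0.0,  # share of duplicates found
        "specificity": tn / (tn + fp) if (tn + fp) else 0.0,  # share of unique records kept
        "accuracy": (tp + tn) / (tp + tn + fp + fn) if (tp + tn + fp + fn) else 0.0,
    }


# Toy example: r4 and r5 are true duplicates; the tool misses r5 (false
# negative) and wrongly flags the unique record r2 (false positive).
print(evaluate_deduplication(
    all_ids=["r1", "r2", "r3", "r4", "r5"],
    benchmark_duplicates=["r4", "r5"],
    tool_duplicates=["r4", "r2"],
))
# -> sensitivity 0.5, specificity ~0.67, accuracy 0.6
```

In this framing, the abstract's finding that Rayyan had the highest sensitivity while Ovid and Covidence had the highest specificity corresponds to catching the most true duplicates versus wrongly flagging the fewest unique records.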