Performance of Qure.ai automatic classifiers against a large annotated database of patients with diverse forms of tuberculosis.

The availability of trained radiologists for fast processing of CXRs in regions burdened with tuberculosis has always been a challenge, affecting both timely diagnosis and patient monitoring. The paucity of annotated images of the lungs of TB patients hampers attempts to apply data-oriented algorithms in research and clinical practice. The TB Portals Program database (TBPP, https://TBPortals.niaid.nih.gov) is a global collaboration curating a large collection of the most dangerous, hard-to-cure drug-resistant tuberculosis (DR-TB) patient cases. With 1,179 (83%) DR-TB patient cases, TBPP is a unique collection that is well positioned as a testing ground for deep learning classifiers. As of January 2019, the TBPP database contained 1,538 CXRs, of which 346 (22.5%) were annotated by a radiologist and 104 (6.7%) by a pulmonologist, leaving 1,088 (70.7%) CXRs without annotations. The Qure.ai qXR artificial-intelligence automated CXR interpretation tool was blind-tested on the 346 radiologist-annotated CXRs from the TBPP database. Qure.ai qXR predictions for cavity, nodule, pleural effusion, and hilar lymphadenopathy successfully matched human expert annotations. In addition, we tested the 12 Qure.ai classifiers to determine whether they correlate with treatment success (information provided by treating physicians). Ten descriptors were found to be significant: abnormal CXR (p = 0.0005), pleural effusion (p = 0.048), nodule (p = 0.0004), hilar lymphadenopathy (p = 0.0038), cavity (p = 0.0002), opacity (p = 0.0006), atelectasis (p = 0.0074), consolidation (p = 0.0004), indicator of TB disease (p < 0.0001), and fibrosis (p < 0.0001). We conclude that the fully automated Qure.ai CXR analysis tool is useful for fast, accurate, uniform, large-scale CXR annotation assistance, as it performed well even on DR-TB cases that were not used for its initial training. Testing artificial intelligence algorithms (encompassing both machine learning and deep learning classifiers) on diverse data collections, such as TBPP, is critically important for progressing toward clinically adopted automatic assistants for medical data analysis.
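The abstract reports per-descriptor p-values linking each binary qXR finding to treatment success, but does not state which statistical test was used. The following is a minimal sketch of how such an association could be checked, assuming binarized classifier outputs and a per-patient outcome flag; the file name, column names, and the choice of Fisher's exact test are illustrative assumptions, not the paper's documented method.

    # Minimal sketch (assumptions, not the paper's documented pipeline):
    # test whether a binary qXR finding (e.g., "cavity") is associated
    # with treatment success reported by the treating physician.
    import pandas as pd
    from scipy.stats import fisher_exact

    # Hypothetical per-patient table:
    #   cavity            -> 1 if qXR flagged a cavity, else 0
    #   treatment_success -> 1 if treatment was reported successful, else 0
    df = pd.read_csv("tbpp_qxr_outcomes.csv")  # hypothetical file name

    # 2x2 contingency table: classifier finding vs. treatment outcome
    # (assumes both columns take exactly two values, 0 and 1).
    table = pd.crosstab(df["cavity"], df["treatment_success"])

    # Fisher's exact test returns an odds ratio and a two-sided p-value.
    odds_ratio, p_value = fisher_exact(table.values)
    print(f"cavity vs. treatment success: OR={odds_ratio:.2f}, p={p_value:.4g}")

Repeating the same test over all 12 qXR descriptors, with an appropriate multiple-comparison correction, would mirror the per-descriptor p-values listed in the abstract.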


Bibliographic Details
Main Authors: Eric Engle, Andrei Gabrielian, Alyssa Long, Darrell E Hurt, Alex Rosenthal
Format: Article
Language: English
Published: Public Library of Science (PLoS), 2020-01-01
Series: PLoS ONE
ISSN: 1932-6203
Online Access: https://doi.org/10.1371/journal.pone.0224445