Cross-Organ, Cross-Modality Transfer Learning: Feasibility Study for Segmentation and Classification
We conducted two analyses comparing the transferability of a traditionally transfer-learned CNN (TL) to that of a CNN fine-tuned first with an unrelated set of medical images (mammograms in this study) and then fine-tuned a second time using TL, which we call the cross-organ, cross-modality transfer learned (XTL) network, on 1) multiple sclerosis (MS) segmentation of brain magnetic resonance (MR) images and 2) tumor malignancy classification of multi-parametric prostate MR images. We used 2133 screening mammograms and two public challenge datasets (longitudinal MS lesion segmentation and ProstateX) as the intermediate and target datasets for XTL, respectively. We used two CNN architectures as basis networks for each analysis and fine-tuned them to match the target image types (volumetric) and tasks (segmentation and classification). We evaluated the XTL networks against the traditional TL networks using the Dice coefficient and AUC as figures of merit for the two analyses, respectively. For the segmentation test, XTL networks outperformed TL networks in terms of Dice coefficient (0.72 vs. [0.70 - 0.71], p < 0.0001 for the differences). For the classification test, XTL networks (AUC = 0.77 - 0.80) outperformed TL networks (AUC = 0.73 - 0.75); the difference in AUCs (AUCdiff = 0.045 - 0.047) was statistically significant (p < 0.03). We showed that XTL using mammograms improves network performance compared to traditional TL, despite the differences in image characteristics (x-ray vs. MRI, 2D vs. 3D) and imaging tasks (classification vs. segmentation for one of the tasks).
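The segmentation arm above is scored with the Dice coefficient, which measures overlap between a predicted mask and the ground-truth mask. As an illustrative sketch (not the authors' code), the metric over two binary masks can be computed as:

```python
def dice_coefficient(pred, truth):
    """Dice similarity 2*|A intersect B| / (|A| + |B|) over binary masks,
    given as flattened 0/1 sequences of equal length."""
    assert len(pred) == len(truth)
    intersection = sum(1 for p, t in zip(pred, truth) if p and t)
    total = sum(pred) + sum(truth)
    # Convention: two empty masks count as perfect agreement.
    return 2.0 * intersection / total if total else 1.0

# Toy 1-D "masks": predicted vs. ground-truth lesion voxels
pred  = [0, 1, 1, 1, 0, 0, 1, 0]
truth = [0, 1, 1, 0, 0, 1, 1, 0]
print(dice_coefficient(pred, truth))  # 0.75
```

A Dice of 0.72 vs. 0.70-0.71, as reported above, therefore reflects a small but consistent gain in voxel-level overlap with the reference segmentation.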
Main Authors: | Juhun Lee, Robert M. Nishikawa |
---|---|
Format: | Article |
Language: | English |
Published: | IEEE, 2020-01-01 |
Series: | IEEE Access |
Subjects: | Transfer learning; deep learning; segmentation; classification; cross-organ; cross-modality |
Online Access: | https://ieeexplore.ieee.org/document/9262863/ |
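The classification arm is scored with the area under the ROC curve (AUC). A minimal illustration of the rank-based (Mann-Whitney) AUC estimate, again not taken from the paper's implementation:

```python
def auc_mann_whitney(scores_pos, scores_neg):
    """Empirical AUC = P(score_pos > score_neg) + 0.5 * P(tie),
    estimated over all positive/negative score pairs."""
    wins = ties = 0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1
            elif p == n:
                ties += 1
    return (wins + 0.5 * ties) / (len(scores_pos) * len(scores_neg))

pos = [0.9, 0.8, 0.6]  # classifier scores for malignant cases
neg = [0.7, 0.4, 0.3]  # classifier scores for benign cases
print(auc_mann_whitney(pos, neg))  # 8/9, about 0.889
```

On this scale, the reported AUC gap of 0.045-0.047 between XTL (0.77-0.80) and TL (0.73-0.75) means the XTL networks rank a malignant case above a benign one about 4-5 percentage points more often.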
id | doaj-f4e7b7866c364bed9a48ebcffe4d30f5 |
DOI: 10.1109/ACCESS.2020.3038909 (article no. 9262863)
ISSN: 2169-3536
Citation: IEEE Access, vol. 8, pp. 210194-210205, 2020
Authors: Juhun Lee (https://orcid.org/0000-0001-7151-0540) and Robert M. Nishikawa, both at the Department of Radiology, University of Pittsburgh, Pittsburgh, PA, USA