TumorGAN: A Multi-Modal Data Augmentation Framework for Brain Tumor Segmentation
The high human labor demand involved in collecting paired medical imaging data severely impedes the application of deep learning methods to medical image processing tasks such as tumor segmentation. The situation is further worsened when collecting multi-modal image pairs. However, this issue can be resolved through the help of generative adversarial networks, which can be used to generate realistic images. In this work, we propose a novel framework, named TumorGAN, to generate image segmentation pairs based on unpaired adversarial training. To improve the quality of the generated images, we introduce a regional perceptual loss to enhance the performance of the discriminator. We also develop a regional L1 loss to constrain the color of the imaged brain tissue. Finally, we verify the performance of TumorGAN on a public brain tumor data set, BraTS 2017. The experimental results demonstrate that the synthetic data pairs generated by our proposed method can practically improve tumor segmentation performance when applied to segmentation network training.
Main Authors: | Qingyun Li, Zhibin Yu, Yubo Wang, Haiyong Zheng |
Format: | Article |
Language: | English |
Published: | MDPI AG, 2020-07-01 |
Series: | Sensors |
Subjects: | medical image augmentation; generative adversarial network; brain tumor segmentation; image-to-image |
Online Access: | https://www.mdpi.com/1424-8220/20/15/4203 |
id | doaj-04019b33d7ac4738b526a1a070269b40 |
record_format | Article |
DOI | 10.3390/s20154203 |
ISSN | 1424-8220 |
Published in | Sensors, Vol. 20, Article 4203, MDPI AG, 2020-07-01 |
Affiliations | Qingyun Li, Zhibin Yu, and Haiyong Zheng: College of Information Science and Engineering, Ocean University of China, Qingdao 266100, China; Yubo Wang: School of Life Science and Technology, Xidian University, Xi’an 710071, China |
collection |
DOAJ |
sources |
DOAJ |
author |
Qingyun Li, Zhibin Yu, Yubo Wang, Haiyong Zheng |
author_sort |
Qingyun Li |
title |
TumorGAN: A Multi-Modal Data Augmentation Framework for Brain Tumor Segmentation |
issn |
1424-8220 |
description |
The high human labor demand involved in collecting paired medical imaging data severely impedes the application of deep learning methods to medical image processing tasks such as tumor segmentation. The situation is further worsened when collecting multi-modal image pairs. However, this issue can be resolved through the help of generative adversarial networks, which can be used to generate realistic images. In this work, we propose a novel framework, named TumorGAN, to generate image segmentation pairs based on unpaired adversarial training. To improve the quality of the generated images, we introduce a regional perceptual loss to enhance the performance of the discriminator. We also develop a regional L1 loss to constrain the color of the imaged brain tissue. Finally, we verify the performance of TumorGAN on a public brain tumor data set, BraTS 2017. The experimental results demonstrate that the synthetic data pairs generated by our proposed method can practically improve tumor segmentation performance when applied to segmentation network training. |
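The "regional L1 loss" named in the description is, in essence, a pixel-wise L1 penalty restricted to a tissue-region mask. The following is a minimal, framework-agnostic sketch of that idea; the function name, tensor layout, and pixel-count normalization are illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

def regional_l1_loss(generated, target, region_mask):
    """Sketch of an L1 loss restricted to a masked region.

    generated, target: arrays of shape (N, C, H, W)
    region_mask: binary array of shape (N, 1, H, W) marking the
        brain-tissue region whose color/intensity is constrained.
    """
    # Absolute difference, zeroed outside the region of interest.
    diff = np.abs(generated - target) * region_mask
    # Normalize by the number of masked pixels so the loss magnitude
    # does not depend on the size of the region.
    return diff.sum() / max(region_mask.sum(), 1)
```

In a GAN training loop, a term like this would typically be added to the adversarial objective with a weighting coefficient, penalizing the generator only where real tissue appearance must be preserved.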
topic |
medical image augmentation; generative adversarial network; brain tumor segmentation; image-to-image |
url |
https://www.mdpi.com/1424-8220/20/15/4203 |