TumorGAN: A Multi-Modal Data Augmentation Framework for Brain Tumor Segmentation


Bibliographic Details
Main Authors: Qingyun Li, Zhibin Yu, Yubo Wang, Haiyong Zheng
Format: Article
Language: English
Published: MDPI AG, 2020-07-01
Series: Sensors
Online Access: https://www.mdpi.com/1424-8220/20/15/4203
Description
Summary: The high human labor demand involved in collecting paired medical imaging data severely impedes the application of deep learning methods to medical image processing tasks such as tumor segmentation. The situation is further worsened when collecting multi-modal image pairs. However, this issue can be mitigated with generative adversarial networks, which can be used to generate realistic images. In this work, we propose a novel framework, named TumorGAN, to generate image segmentation pairs based on unpaired adversarial training. To improve the quality of the generated images, we introduce a regional perceptual loss to enhance the performance of the discriminator. We also develop a regional $L_1$ loss to constrain the color of the imaged brain tissue. Finally, we verify the performance of TumorGAN on a public brain tumor data set, BraTS 2017. The experimental results demonstrate that the synthetic data pairs generated by our proposed method effectively improve tumor segmentation performance when used to train a segmentation network.
ISSN: 1424-8220
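
For readers wanting a concrete picture of the regional $L_1$ loss mentioned in the summary, here is a minimal PyTorch-style sketch. The paper's exact formulation is not reproduced in this record, so the masking scheme, function name, and normalization below are assumptions for illustration only: the idea is simply to penalize absolute pixel differences only inside a brain-tissue region.

```python
import torch

def regional_l1_loss(generated: torch.Tensor,
                     reference: torch.Tensor,
                     tissue_mask: torch.Tensor) -> torch.Tensor:
    """Hypothetical sketch of a regional L1 loss (not the paper's exact form).

    Penalizes per-pixel absolute differences only inside the brain-tissue
    region selected by `tissue_mask` (1 = tissue, 0 = ignore), constraining
    the color of the generated tissue to match the reference.
    """
    diff = torch.abs(generated - reference) * tissue_mask
    # Normalize by the number of masked pixels so the loss stays
    # comparable across images with different tissue areas.
    return diff.sum() / tissue_mask.sum().clamp(min=1.0)
```

In a GAN training loop, a term like this would typically be added to the generator objective with a weighting coefficient, alongside the adversarial and perceptual losses; the actual weights used in TumorGAN are not given in this record.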