Generative models: an upcoming innovation in musculoskeletal radiology? A preliminary test in spine imaging


Bibliographic Details
Main Authors: Fabio Galbusera, Tito Bassani, Gloria Casaroli, Salvatore Gitto, Edoardo Zanchetta, Francesco Costa, Luca Maria Sconfienza
Format: Article
Language: English
Published: SpringerOpen 2018-10-01
Series: European Radiology Experimental
Subjects:
Online Access: http://link.springer.com/article/10.1186/s41747-018-0060-7
Description
Summary: Abstract
Background: Deep learning is a ground-breaking technology that is revolutionising many research and industrial fields. Generative models have recently been gaining interest. Here, we investigate the potential of one such approach, conditional generative adversarial networks, in the field of magnetic resonance imaging (MRI) of the spine by performing clinically relevant benchmark cases.
Methods: First, enhancement of the resolution of T2-weighted (T2W) images (super-resolution) was tested. Then, automated image-to-image translation was tested in the following tasks: (1) from T1-weighted to T2W images of the lumbar spine and (2) vice versa; (3) from T2W to short-time inversion recovery (STIR) images; (4) from T2W to turbo inversion recovery magnitude (TIRM) images; (5) from sagittal standing x-ray projections to T2W images. Clinical and quantitative assessments of the outputs were performed by means of image quality metrics. The models were trained on MRI and x-ray images from 989 patients.
Results: The performance of the models was generally positive and promising, but with several limitations. The number of disc protrusions or herniations showed good concordance (κ = 0.691) between native and super-resolution images. Moderate-to-excellent concordance was found when translating T2W to STIR and TIRM images (κ ≥ 0.842 for disc degeneration), while agreement was poor when translating x-ray to T2W images.
Conclusions: Conditional generative adversarial networks are able to generate perceptually convincing synthetic images of the spine in super-resolution and image-to-image translation tasks. Taking into account the limitations of the study, deep learning-based generative methods showed the potential to be an upcoming innovation in musculoskeletal radiology.
ISSN: 2509-9280
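
As a concrete illustration of the kind of method the abstract refers to, below is a minimal pix2pix-style conditional GAN training step in PyTorch for image-to-image translation between MR contrasts (e.g. T1-weighted to T2-weighted). This is a sketch under stated assumptions: the network sizes, the dummy 128 x 128 single-channel batch, the loss weighting and all variable names are illustrative, not the architecture or data used in the study.

# Minimal pix2pix-style conditional GAN training step in PyTorch.
# Everything here (network sizes, dummy 128x128 single-channel batch, loss
# weighting) is an illustrative assumption, not the authors' implementation.
import torch
import torch.nn as nn

class Generator(nn.Module):
    # Toy encoder-decoder mapping one MR contrast to another (e.g. T1W -> T2W).
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 1, 4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, x):
        return self.net(x)

class Discriminator(nn.Module):
    # PatchGAN-like critic that sees source and target concatenated channel-wise,
    # so it judges whether the (source, target) pair is consistent.
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(128, 1, 4, padding=1),  # patch-wise real/fake logits
        )

    def forward(self, src, tgt):
        return self.net(torch.cat([src, tgt], dim=1))

G, D = Generator(), Discriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4, betas=(0.5, 0.999))
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4, betas=(0.5, 0.999))
adv_loss, l1_loss = nn.BCEWithLogitsLoss(), nn.L1Loss()

# Dummy paired batch standing in for co-registered sagittal slices.
src = torch.randn(4, 1, 128, 128)  # e.g. T1-weighted slices
tgt = torch.randn(4, 1, 128, 128)  # paired T2-weighted slices

# Discriminator step: real pairs labelled 1, generated pairs labelled 0.
fake = G(src).detach()
d_real, d_fake = D(src, tgt), D(src, fake)
loss_d = adv_loss(d_real, torch.ones_like(d_real)) + \
         adv_loss(d_fake, torch.zeros_like(d_fake))
opt_d.zero_grad()
loss_d.backward()
opt_d.step()

# Generator step: fool the discriminator, plus an L1 term that keeps the
# synthetic contrast anatomically aligned with the input (pix2pix-style).
fake = G(src)
d_out = D(src, fake)
loss_g = adv_loss(d_out, torch.ones_like(d_out)) + 100.0 * l1_loss(fake, tgt)
opt_g.zero_grad()
loss_g.backward()
opt_g.step()

The design point worth noting is that the discriminator is conditioned on the source image, so realism is judged for the image pair rather than for the synthetic image in isolation, while the L1 term discourages anatomically implausible hallucinations.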