The Fusion of Multimodal Brain Imaging Data from Geometry Perspectives

abstract: Rapid progress in acquiring multimodal neuroimaging data provides opportunities to systematically characterize human brain structure and function. In brain magnetic resonance imaging (MRI), a typical non-invasive imaging technique, different acquisition sequences (modalities) yield different descriptions of brain functional activity or anatomical biomarkers. Beyond traditional voxel-level analysis, there is a growing trend to investigate cross-modality relationships at higher-level representations of images, e.g., surfaces and networks. In this study, I aim to achieve multimodal brain image fusion by exploiting intrinsic properties of the data, in particular the geometry of the embedding structures on which commonly used image features reside. Since the image features investigated here share an identical embedding space, i.e., they are defined on either a brain surface or a brain atlas, where a graph structure is easy to define, it is natural to consider the mathematically meaningful properties of this shared structure from a geometric perspective. I first introduce the background of multimodal fusion of brain imaging data and the insight that geometric properties can potentially link different modalities. I then fully discuss several proposed computational frameworks, built either on solid, efficient geometric algorithms or on current geometric deep learning models. I show how these frameworks handle distinct geometric properties and how they apply to real healthcare scenarios, e.g., enhanced detection of fetal brain diseases and abnormal brain development.
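To make the shared-embedding idea concrete, the following is a minimal sketch (not taken from the dissertation): when two modalities both provide features on the same brain-atlas regions, a region-adjacency graph supplies a common geometric structure, and even one normalized graph-convolution step can mix the concatenated multimodal features across neighboring regions. The atlas size, adjacency, feature dimensions, and weights below are hypothetical placeholders.

import numpy as np

# Hypothetical setup: N atlas regions shared by both modalities.
N = 4

# Region-adjacency of the shared brain atlas (symmetric, hypothetical values).
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 1]], dtype=float)

# Per-region features from two modalities (e.g. structural and functional),
# both indexed by the same atlas regions; random stand-ins here.
rng = np.random.default_rng(0)
X_struct = rng.normal(size=(N, 3))   # e.g. thickness, volume, surface area
X_func = rng.normal(size=(N, 5))     # e.g. functional-connectivity summaries

# Early fusion: concatenate features that live on the identical embedding space.
X = np.concatenate([X_struct, X_func], axis=1)

# Symmetrically normalize the adjacency with self-loops, as in a basic
# graph-convolution layer.
A_hat = A + np.eye(N)
D_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt

# One propagation step mixes each region's multimodal features with its
# geometric neighbors on the shared graph.
W = rng.normal(size=(X.shape[1], 8))     # hypothetical learnable weights
H = np.maximum(A_norm @ X @ W, 0.0)      # ReLU(A_norm X W)

print(H.shape)  # (N, 8): fused, geometry-aware region embeddings

The dissertation's frameworks operate on much larger surfaces and atlases with dedicated geometric algorithms or geometric deep learning models; this sketch only illustrates why a shared graph makes cross-modality fusion straightforward to formulate.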


Bibliographic Details
Other Authors: Zhang, Wen (Author)
Format: Doctoral Thesis
Language: English
Published: 2020
Subjects: Computer science; Brain Imaging; Geometry; Multimodal fusion
Online Access:http://hdl.handle.net/2286/R.I.62817
id ndltd-asu.edu-item-62817
record_format oai_dc
spelling ndltd-asu.edu-item-62817 2020-12-09T05:00:43Z
Title: The Fusion of Multimodal Brain Imaging Data from Geometry Perspectives
Contributors: Zhang, Wen (Author); Wang, Yalin (Advisor); Liu, Huan (Committee member); Li, Baoxin (Committee member); Braden, B. Blair (Committee member); Arizona State University (Publisher)
Subjects: Computer science; Brain Imaging; Geometry; Multimodal fusion
Language: eng
Extent: 111 pages
Type: Doctoral Dissertation, Computer Science, 2020
Online Access: http://hdl.handle.net/2286/R.I.62817
Rights: http://rightsstatements.org/vocab/InC/1.0/
collection NDLTD
language English
format Doctoral Thesis
sources NDLTD
topic Computer science
Brain Imaging
Geometry
Multimodal fusion
author2 Zhang, Wen (Author)
title The Fusion of Multimodal Brain Imaging Data from Geometry Perspectives
publishDate 2020
url http://hdl.handle.net/2286/R.I.62817
_version_ 1719368824283725824