Probabilistic Latent Variable Model for Learning Data Representation


Bibliographic Details
Main Author: Sih-Huei Chen (陳思卉)
Other Authors: Jia-Ching Wang
Format: Others
Language: en_US
Published: 2017
Online Access: http://ndltd.ncl.edu.tw/handle/4xt8z4
id ndltd-TW-106NCU05392120
record_format oai_dc
spelling ndltd-TW-106NCU05392120 2019-10-31T05:22:24Z http://ndltd.ncl.edu.tw/handle/4xt8z4 Probabilistic Latent Variable Model for Learning Data Representation 機率型潛在變數模型於資料表示法學習 Sih-Huei Chen 陳思卉 PhD National Central University Department of Computer Science and Information Engineering 106 The probabilistic framework has emerged as a powerful approach to representation learning. This dissertation proposes representation learning methods based on probabilistic latent variable models with both discrete and continuous latent spaces. For a discrete latent space, a hierarchical representation based on Gaussian hierarchical latent Dirichlet allocation (G-hLDA) is proposed to capture the latent characteristics of low-level features. The representation is learned by constructing an infinitely deep, branching, tree-structured mixture model, which effectively models subtle differences among classes. For a continuous latent space, a novel complex-valued latent variable model, the complex-valued Gaussian process latent variable model (CGPLVM), is developed to discover a compressed complex-valued representation of complex-valued data. The key idea of the CGPLVM is that complex-valued data are approximated by a low-dimensional complex-valued latent representation through a function drawn from a complex Gaussian process. Additionally, the model is designed to preserve both global and local data structures while promoting discrimination: a new objective function that incorporates a locality-preserving term and a discriminative term for complex-valued data is presented. Finally, a deep collaborative learning framework based on a variational autoencoder (VAE) and a Gaussian process (GP) is proposed to represent multimedia data with greater discriminative power than previously achieved. A Gaussian process classifier is incorporated into the VAE to guide the VAE-based representation, which distinguishes variations of the data among classes and achieves the dual goals of reconstruction and classification. The developed methods are evaluated on multimedia data, and the experimental results demonstrate the superior performance of the proposed methods, especially when only a small amount of training data is available. Jia-Ching Wang 王家慶 2017 dissertation ; thesis 148 en_US
collection NDLTD
language en_US
format Others
sources NDLTD
description PhD === National Central University === Department of Computer Science and Information Engineering === 106 === The probabilistic framework has emerged as a powerful approach to representation learning. This dissertation proposes representation learning methods based on probabilistic latent variable models with both discrete and continuous latent spaces. For a discrete latent space, a hierarchical representation based on Gaussian hierarchical latent Dirichlet allocation (G-hLDA) is proposed to capture the latent characteristics of low-level features. The representation is learned by constructing an infinitely deep, branching, tree-structured mixture model, which effectively models subtle differences among classes. For a continuous latent space, a novel complex-valued latent variable model, the complex-valued Gaussian process latent variable model (CGPLVM), is developed to discover a compressed complex-valued representation of complex-valued data. The key idea of the CGPLVM is that complex-valued data are approximated by a low-dimensional complex-valued latent representation through a function drawn from a complex Gaussian process. Additionally, the model is designed to preserve both global and local data structures while promoting discrimination: a new objective function that incorporates a locality-preserving term and a discriminative term for complex-valued data is presented. Finally, a deep collaborative learning framework based on a variational autoencoder (VAE) and a Gaussian process (GP) is proposed to represent multimedia data with greater discriminative power than previously achieved. A Gaussian process classifier is incorporated into the VAE to guide the VAE-based representation, which distinguishes variations of the data among classes and achieves the dual goals of reconstruction and classification. The developed methods are evaluated on multimedia data, and the experimental results demonstrate the superior performance of the proposed methods, especially when only a small amount of training data is available.
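As a rough illustration of the key CGPLVM idea stated in the abstract (a minimal sketch only; the symbols below are assumed notation, not taken from the dissertation), each observed complex-valued vector y_n in C^D is tied to a low-dimensional complex-valued latent point x_n in C^Q, with Q much smaller than D, through a mapping f drawn from a complex Gaussian process:

    f \sim \mathcal{GP}\bigl(0, k(\mathbf{x}, \mathbf{x}')\bigr), \qquad
    \mathbf{y}_n = f(\mathbf{x}_n) + \boldsymbol{\varepsilon}_n, \qquad
    \boldsymbol{\varepsilon}_n \sim \mathcal{CN}(\mathbf{0}, \sigma^2 \mathbf{I}), \qquad n = 1, \dots, N.

Learning then amounts to inferring the latent points (and kernel hyperparameters) that best explain the observations, with the locality-preserving and discriminative terms mentioned in the abstract added to the objective.

The deep collaborative learning framework described in the abstract couples a VAE with a GP classifier so that a single latent code serves both reconstruction and classification. The following PyTorch sketch is hypothetical: it only illustrates that joint objective, replaces the dissertation's GP classifier with a simple linear softmax head for brevity, and every name, dimension, and weight is an assumption rather than the author's implementation.

    # Hypothetical sketch of a classifier-guided VAE objective (not the
    # dissertation's code): the latent code is trained for reconstruction
    # and, via an added classification term, for class discrimination.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class ClassifierGuidedVAE(nn.Module):
        def __init__(self, input_dim=784, latent_dim=16, num_classes=10):
            super().__init__()
            self.enc = nn.Sequential(nn.Linear(input_dim, 256), nn.ReLU())
            self.fc_mu = nn.Linear(256, latent_dim)       # posterior mean
            self.fc_logvar = nn.Linear(256, latent_dim)   # posterior log-variance
            self.dec = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                                     nn.Linear(256, input_dim))
            # stand-in for the GP classifier of the dissertation
            self.cls = nn.Linear(latent_dim, num_classes)

        def forward(self, x):
            h = self.enc(x)
            mu, logvar = self.fc_mu(h), self.fc_logvar(h)
            z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization
            return self.dec(z), self.cls(mu), mu, logvar

    def loss_fn(x, y, x_hat, logits, mu, logvar, beta=1.0, gamma=1.0):
        recon = F.mse_loss(x_hat, x)                                   # reconstruction error
        kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())  # KL to N(0, I), averaged
        ce = F.cross_entropy(logits, y)                                # discriminative guidance
        return recon + beta * kl + gamma * ce

    # usage on random data
    model = ClassifierGuidedVAE()
    x, y = torch.randn(8, 784), torch.randint(0, 10, (8,))
    x_hat, logits, mu, logvar = model(x)
    loss_fn(x, y, x_hat, logits, mu, logvar).backward()

Minimizing this combined loss over mini-batches trades off reconstruction fidelity (the VAE terms) against class discrimination (the classification term), mirroring the dual goals of reconstruction and classification mentioned in the abstract.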
author2 Jia-Ching Wang
author_facet Jia-Ching Wang
Sih-Huei Chen
陳思卉
author Sih-Huei Chen
陳思卉
spellingShingle Sih-Huei Chen
陳思卉
Probabilistic Latent Variable Model for Learning Data Representation
author_sort Sih-Huei Chen
title Probabilistic Latent Variable Model for Learning Data Representation
title_short Probabilistic Latent Variable Model for Learning Data Representation
title_full Probabilistic Latent Variable Model for Learning Data Representation
title_fullStr Probabilistic Latent Variable Model for Learning Data Representation
title_full_unstemmed Probabilistic Latent Variable Model for Learning Data Representation
title_sort probabilistic latent variable model for learning data representation
publishDate 2017
url http://ndltd.ncl.edu.tw/handle/4xt8z4
work_keys_str_mv AT sihhueichen probabilisticlatentvariablemodelforlearningdatarepresentation
AT chénsīhuì probabilisticlatentvariablemodelforlearningdatarepresentation
AT sihhueichen jīlǜxíngqiánzàibiànshùmóxíngyúzīliàobiǎoshìfǎxuéxí
AT chénsīhuì jīlǜxíngqiánzàibiànshùmóxíngyúzīliàobiǎoshìfǎxuéxí
_version_ 1719284360663793664