Learnable Prior Regularized Autoencoder
Master's === National Chiao Tung University === Degree Program of Computer Science, College of Computer Science === 106 === Most deep latent factor models choose simple priors for simplicity, for tractability, or because it is unclear which prior to use. Recent studies show that the choice of prior may have a profound effect on the expressiveness of the model, especially when its generative net...
Main Authors: | Ko, Wei-Jan (柯維然) |
Other Authors: | Sun, Chuen-Tsai (孫春在) |
Format: | Others |
Language: | zh-TW |
Published: | 2018 |
Online Access: | http://ndltd.ncl.edu.tw/handle/dy9h7d |
id | ndltd-TW-106NCTU5392010 |
record_format | oai_dc |
spelling | ndltd-TW-106NCTU5392010 2019-11-28T05:22:26Z http://ndltd.ncl.edu.tw/handle/dy9h7d Learnable Prior Regularized Autoencoder 學習式先驗正規化自編碼器 Ko, Wei-Jan 柯維然. Master's, National Chiao Tung University, Degree Program of Computer Science, College of Computer Science, 106. Most deep latent factor models choose simple priors for simplicity, for tractability, or because it is unclear which prior to use. Recent studies show that the choice of prior may have a profound effect on the expressiveness of the model, especially when its generative network has limited capacity. In this paper, we propose to learn a proper prior from data for the Prior Regularized Autoencoder. We introduce the notion of code generators to transform manually selected simple priors into ones that better characterize the data distribution. Experimental results show that the proposed model can generate images of better quality and learn better disentangled representations than adversarial autoencoders (AAEs) in both supervised and unsupervised settings. Lastly, we demonstrate its ability to perform cross-domain translation in a text-to-image synthesis task. Sun, Chuen-Tsai 孫春在. 2018. Degree thesis; 45; zh-TW |
collection | NDLTD |
language | zh-TW |
format | Others |
sources | NDLTD |
description | Master's === National Chiao Tung University === Degree Program of Computer Science, College of Computer Science === 106 === Most deep latent factor models choose simple priors for simplicity, for tractability, or because it is unclear which prior to use. Recent studies show that the choice of prior may have a profound effect on the expressiveness of the model, especially when its generative network has limited capacity. In this paper, we propose to learn a proper prior from data for the Prior Regularized Autoencoder. We introduce the notion of code generators to transform manually selected simple priors into ones that better characterize the data distribution. Experimental results show that the proposed model can generate images of better quality and learn better disentangled representations than adversarial autoencoders (AAEs) in both supervised and unsupervised settings. Lastly, we demonstrate its ability to perform cross-domain translation in a text-to-image synthesis task. |
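As a rough illustration of the architecture the abstract describes, the sketch below shows one way a "code generator" can transform samples from a simple Gaussian prior into a learned prior, which a discriminator then matches against encoder codes in an adversarial-autoencoder-style loop. This is not taken from the thesis; all module names, dimensions, and the loss layout are illustrative assumptions.

```python
# Minimal, hypothetical sketch of a learnable-prior (code-generator) setup.
# Not the thesis implementation; dimensions and modules are placeholders.
import torch
import torch.nn as nn

latent_dim, noise_dim = 8, 8

encoder = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, latent_dim))
decoder = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(), nn.Linear(256, 784))
# Code generator: maps simple Gaussian noise to a learned prior over codes.
code_generator = nn.Sequential(nn.Linear(noise_dim, 64), nn.ReLU(), nn.Linear(64, latent_dim))
# Discriminator: distinguishes learned-prior samples from encoder codes.
discriminator = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(), nn.Linear(64, 1))

recon_loss = nn.MSELoss()
adv_loss = nn.BCEWithLogitsLoss()

def training_step(x):
    """One simplified step: reconstruct x, then align encoder codes with the learned prior."""
    z = encoder(x)
    x_hat = decoder(z)
    loss_recon = recon_loss(x_hat, x)

    # Learned prior: push simple Gaussian noise through the code generator.
    eps = torch.randn(x.size(0), noise_dim)
    z_prior = code_generator(eps)

    # Discriminator treats learned-prior codes as "real" and encoder codes as "fake".
    d_real = discriminator(z_prior.detach())
    d_fake = discriminator(z.detach())
    loss_d = adv_loss(d_real, torch.ones_like(d_real)) + adv_loss(d_fake, torch.zeros_like(d_fake))

    # Encoder tries to fool the discriminator (the full model would also
    # train the code generator itself; omitted here for brevity).
    loss_g = adv_loss(discriminator(z), torch.ones_like(d_real))
    return loss_recon, loss_d, loss_g
```

The point the abstract makes is that the prior is produced by a trainable network rather than fixed by hand, while the adversarial matching between codes and prior samples stays the same as in an AAE.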
author2 | Sun, Chuen-Tsai |
author_facet | Sun, Chuen-Tsai; Ko, Wei-Jan 柯維然 |
author | Ko, Wei-Jan 柯維然 |
spellingShingle | Ko, Wei-Jan 柯維然; Learnable Prior Regularized Autoencoder |
author_sort | Ko, Wei-Jan |
title | Learnable Prior Regularized Autoencoder |
title_short | Learnable Prior Regularized Autoencoder |
title_full | Learnable Prior Regularized Autoencoder |
title_fullStr | Learnable Prior Regularized Autoencoder |
title_full_unstemmed | Learnable Prior Regularized Autoencoder |
title_sort | learnable prior regularized autoencoder |
publishDate | 2018 |
url | http://ndltd.ncl.edu.tw/handle/dy9h7d |
work_keys_str_mv | AT koweijan learnablepriorregularizedautoencoder AT kēwéirán learnablepriorregularizedautoencoder AT koweijan xuéxíshìxiānyànzhèngguīhuàzìbiānmǎqì AT kēwéirán xuéxíshìxiānyànzhèngguīhuàzìbiānmǎqì |
_version_ | 1719297797313789952 |