Variations in Variational Autoencoders - A Comparative Evaluation
| Main Authors: | Ruoqi Wei, Cesar Garcia, Ahmed El-Sayed, Viyaleta Peterson, Ausif Mahmood |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | IEEE, 2020-01-01 |
| Series: | IEEE Access (vol. 8, pp. 153651-153670) |
| ISSN: | 2169-3536 |
| DOI: | 10.1109/ACCESS.2020.3018151 |
| Subjects: | Deep learning; variational autoencoders (VAEs); data representation; generative models; unsupervised learning; representation learning |
| Online Access: | https://ieeexplore.ieee.org/document/9171997/ |
Author affiliations: all five authors are with the Department of Computer Science and Engineering, University of Bridgeport, Bridgeport, CT, USA.

ORCID: Ruoqi Wei (0000-0002-1771-542X); Ahmed El-Sayed (0000-0003-4746-9095); Ausif Mahmood (0000-0002-8991-4268).
Description:

Variational Auto-Encoders (VAEs) are deep latent-space generative models that have been immensely successful in many applications, including image generation, image captioning, protein design, mutation prediction, and language modeling. The fundamental idea in VAEs is to learn the distribution of data in such a way that new, meaningful data can be generated from the encoded distribution. This concept has led to tremendous research and many variations in the design of VAEs in the last few years, creating a field of its own referred to as unsupervised representation learning. This paper provides a much-needed comprehensive evaluation of the variations of VAEs based on their end goals and resulting architectures. It further provides the intuition, mathematical formulation, and quantitative results for each popular variation, presents a concise comparison of these variations, and concludes with challenges and future opportunities for research in VAEs.
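The core idea the abstract describes can be made concrete with the standard VAE objective: an encoder maps an input x to the parameters of an approximate posterior q(z|x), a latent z is sampled via the reparameterization trick, and a decoder reconstructs x, with training maximizing the evidence lower bound (ELBO), log p(x) >= E_{q(z|x)}[log p(x|z)] - KL(q(z|x) || p(z)). The sketch below is a minimal illustration of that vanilla formulation only, not code from the paper; the class name `VAE`, the layer sizes, and the helper `elbo_loss` are illustrative assumptions, and PyTorch is assumed.

```python
# Minimal vanilla VAE sketch (illustrative only, not from the paper).
# Assumes inputs x are flattened and scaled to [0, 1].
import torch
import torch.nn as nn
import torch.nn.functional as F

class VAE(nn.Module):
    def __init__(self, input_dim=784, hidden_dim=400, latent_dim=20):
        super().__init__()
        self.enc = nn.Linear(input_dim, hidden_dim)
        self.mu = nn.Linear(hidden_dim, latent_dim)      # mean of q(z|x)
        self.logvar = nn.Linear(hidden_dim, latent_dim)  # log-variance of q(z|x)
        self.dec1 = nn.Linear(latent_dim, hidden_dim)
        self.dec2 = nn.Linear(hidden_dim, input_dim)

    def encode(self, x):
        h = F.relu(self.enc(x))
        return self.mu(h), self.logvar(h)

    def reparameterize(self, mu, logvar):
        # z = mu + sigma * eps keeps sampling differentiable
        # (the reparameterization trick)
        std = torch.exp(0.5 * logvar)
        eps = torch.randn_like(std)
        return mu + std * eps

    def decode(self, z):
        return torch.sigmoid(self.dec2(F.relu(self.dec1(z))))

    def forward(self, x):
        mu, logvar = self.encode(x)
        z = self.reparameterize(mu, logvar)
        return self.decode(z), mu, logvar

def elbo_loss(recon_x, x, mu, logvar):
    # Negative ELBO: reconstruction term plus KL(q(z|x) || N(0, I)),
    # the latter in closed form for a diagonal Gaussian posterior.
    recon = F.binary_cross_entropy(recon_x, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + kl
```

Training would then minimize `elbo_loss` over minibatches with any standard optimizer; new data is generated by sampling z from the prior N(0, I) and passing it through `decode`. The variations the paper surveys modify pieces of exactly this objective and architecture.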