Faster, Smaller, and Simpler Model for Multiple Facial Attributes Transformation
There are many existing models capable of changing hair color or facial expressions. These models are typically implemented as deep neural networks that require a large number of computations to perform the transformations, making them challenging to deploy on a mobile platform. The usual setup requires an internet connection so that processing can be done on a server, but this limits the application's accessibility and diminishes the user experience for consumers with low internet bandwidth. In this paper, we develop a model that can simultaneously transform multiple facial attributes with a lower memory footprint and fewer computations, making it easier to run on a mobile phone.
Main Authors: | Jonathan Hans Soeseno, Daniel Stanley Tan, Wen-Yin Chen, Kai-Lung Hua |
---|---|
Format: | Article |
Language: | English |
Published: | IEEE, 2019-01-01 |
Series: | IEEE Access |
Subjects: | Facial attribute transformations; generative adversarial networks; image translation |
Online Access: | https://ieeexplore.ieee.org/document/8667297/ |
id |
doaj-b6e6260f9168483990256f0abb61160b |
record_format |
Article |
spelling |
doaj-b6e6260f9168483990256f0abb61160b 2021-03-29T22:23:19Z eng IEEE IEEE Access 2169-3536 2019-01-01 Vol. 7, pp. 36400-36412 10.1109/ACCESS.2019.2905147 8667297 Faster, Smaller, and Simpler Model for Multiple Facial Attributes Transformation. Jonathan Hans Soeseno; Daniel Stanley Tan; Wen-Yin Chen; Kai-Lung Hua (https://orcid.org/0000-0002-7735-243X). Affiliations: Department of Computer Science and Information Technology, National Taiwan University of Science and Technology, Taipei, Taiwan (Soeseno, Tan, Hua); Department of Arts and Design, National Taipei University of Education, Taipei, Taiwan (Chen). There are many existing models capable of changing hair color or facial expressions. These models are typically implemented as deep neural networks that require a large number of computations to perform the transformations, making them challenging to deploy on a mobile platform. The usual setup requires an internet connection so that processing can be done on a server, but this limits the application's accessibility and diminishes the user experience for consumers with low internet bandwidth. In this paper, we develop a model that can simultaneously transform multiple facial attributes with a lower memory footprint and fewer computations, making it easier to run on a mobile phone. Moreover, our encoder-decoder design allows us to encode an image only once and transform it multiple times, making it faster than previous methods, where the whole image has to be processed repeatedly for every attribute transformation. We show in our experiments that our results are comparable to the state-of-the-art models but with 4× fewer parameters and 3× faster execution time. https://ieeexplore.ieee.org/document/8667297/ Keywords: Facial attribute transformations; generative adversarial networks; image translation |
collection |
DOAJ |
language |
English |
format |
Article |
sources |
DOAJ |
author |
Jonathan Hans Soeseno; Daniel Stanley Tan; Wen-Yin Chen; Kai-Lung Hua |
spellingShingle |
Jonathan Hans Soeseno; Daniel Stanley Tan; Wen-Yin Chen; Kai-Lung Hua. Faster, Smaller, and Simpler Model for Multiple Facial Attributes Transformation. IEEE Access. Facial attribute transformations; generative adversarial networks; image translation |
author_facet |
Jonathan Hans Soeseno; Daniel Stanley Tan; Wen-Yin Chen; Kai-Lung Hua |
author_sort |
Jonathan Hans Soeseno |
title |
Faster, Smaller, and Simpler Model for Multiple Facial Attributes Transformation |
title_short |
Faster, Smaller, and Simpler Model for Multiple Facial Attributes Transformation |
title_full |
Faster, Smaller, and Simpler Model for Multiple Facial Attributes Transformation |
title_fullStr |
Faster, Smaller, and Simpler Model for Multiple Facial Attributes Transformation |
title_full_unstemmed |
Faster, Smaller, and Simpler Model for Multiple Facial Attributes Transformation |
title_sort |
faster, smaller, and simpler model for multiple facial attributes transformation |
publisher |
IEEE |
series |
IEEE Access |
issn |
2169-3536 |
publishDate |
2019-01-01 |
description |
There are many existing models capable of changing hair color or facial expressions. These models are typically implemented as deep neural networks that require a large number of computations to perform the transformations, making them challenging to deploy on a mobile platform. The usual setup requires an internet connection so that processing can be done on a server, but this limits the application's accessibility and diminishes the user experience for consumers with low internet bandwidth. In this paper, we develop a model that can simultaneously transform multiple facial attributes with a lower memory footprint and fewer computations, making it easier to run on a mobile phone. Moreover, our encoder-decoder design allows us to encode an image only once and transform it multiple times, making it faster than previous methods, where the whole image has to be processed repeatedly for every attribute transformation. We show in our experiments that our results are comparable to the state-of-the-art models but with 4× fewer parameters and 3× faster execution time. |
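The abstract's key efficiency claim is the encode-once, transform-many design: the expensive encoder runs a single time per image, and each attribute edit then operates on the shared latent code before decoding. The sketch below illustrates that idea only; it is not the authors' implementation, and the toy linear encoder/decoder, the additive attribute vectors, and all names (`encode`, `transform`, `decode`) are illustrative assumptions.

```python
# Hypothetical sketch of the encode-once, transform-many pattern.
# Encoder/decoder are toy linear maps standing in for deep networks.
import numpy as np

rng = np.random.default_rng(0)
D_IMG, D_LAT = 64, 16            # toy image and latent dimensions

W_enc = rng.standard_normal((D_LAT, D_IMG)) * 0.1
W_dec = rng.standard_normal((D_IMG, D_LAT)) * 0.1

encoder_calls = 0                # count how often the heavy step runs

def encode(image):
    """Expensive step: run once per input image."""
    global encoder_calls
    encoder_calls += 1
    return W_enc @ image

def transform(latent, attribute_vector):
    """Cheap step: shift the shared latent toward an attribute."""
    return latent + attribute_vector

def decode(latent):
    """Map an (edited) latent back to image space."""
    return W_dec @ latent

image = rng.standard_normal(D_IMG)
latent = encode(image)           # single encoding pass

# Illustrative attribute directions in latent space.
attributes = {name: rng.standard_normal(D_LAT) * 0.05
              for name in ("hair_color", "smile", "age")}

# Three edits reuse the same latent, so the encoder ran only once,
# unlike pipelines that re-process the full image per attribute.
outputs = {name: decode(transform(latent, vec))
           for name, vec in attributes.items()}

print(encoder_calls)             # 1
print(len(outputs))              # 3
```

Under this pattern, the per-attribute cost is only the transform and decode steps, which is the structural reason the paper's design amortizes better than re-running a whole image-to-image network for every attribute.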
topic |
Facial attribute transformations; generative adversarial networks; image translation |
url |
https://ieeexplore.ieee.org/document/8667297/ |
work_keys_str_mv |
AT jonathanhanssoeseno fastersmallerandsimplermodelformultiplefacialattributestransformation AT danielstanleytan fastersmallerandsimplermodelformultiplefacialattributestransformation AT wenyinchen fastersmallerandsimplermodelformultiplefacialattributestransformation AT kailunghua fastersmallerandsimplermodelformultiplefacialattributestransformation |
_version_ |
1724191771401912320 |