Joint Learning of Generative Translator and Classifier for Visually Similar Classes
In this paper, we propose a Generative Translation Classification Network (GTCN) for improving visual classification accuracy in settings where classes are visually similar and data is scarce. For this purpose, we propose joint learning from scratch to train a classifier and a generative stochastic translation network end-to-end. The translation network is used to perform on-line data augmentation across classes, whereas previous works have mostly involved domain adaptation. To help the model further benefit from this data augmentation, we introduce an adaptive fade-in loss and a quadruplet loss. We perform experiments on multiple datasets to demonstrate the proposed method's performance in varied settings. Of particular interest, training on 40% of the dataset is enough for our model to surpass the performance of baselines trained on the full dataset. When our architecture is trained on the full dataset, we achieve performance comparable with state-of-the-art methods despite using a light-weight architecture.
Main Authors: | Byungin Yoo, Tristan Sylvain, Yoshua Bengio, Junmo Kim |
Format: | Article |
Language: | English |
Published: | IEEE, 2020-01-01 |
Series: | IEEE Access |
Subjects: | Artificial neural networks; feature extraction; image classification; image generation; pattern analysis; semisupervised learning |
Online Access: | https://ieeexplore.ieee.org/document/9279318/ |
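The abstract describes a classifier and a stochastic translation network trained jointly end-to-end, where the loss on translator-augmented samples is "faded in" adaptively. A minimal sketch of such a combined objective is given below; the paper's actual schedule and networks are not reproduced here, so the linear `fade_in_weight`, the `ramp_steps` parameter, and the toy cross-entropy are illustrative assumptions only:

```python
import numpy as np

def classifier_loss(logits, labels):
    # Toy batch cross-entropy: stabilized log-softmax, then mean NLL.
    shifted = logits - logits.max(axis=1, keepdims=True)
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(labels)), labels].mean()

def fade_in_weight(step, ramp_steps=1000):
    # Hypothetical fade-in schedule: generated samples contribute nothing at
    # first, then their loss is blended in linearly as training progresses.
    return min(1.0, step / ramp_steps)

def joint_step(real_logits, real_labels, translated_logits, translated_labels, step):
    # Total loss = loss on real data + faded-in loss on translator-augmented data.
    w = fade_in_weight(step)
    return (classifier_loss(real_logits, real_labels)
            + w * classifier_loss(translated_logits, translated_labels))
```

Early in training the objective reduces to the plain classification loss, so the still-untrained translator cannot corrupt the classifier; the augmentation term only takes full effect once the ramp completes.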
id |
doaj-6f19dc75e3314f80a59fe47b268f06f8 |
record_format |
Article |
spelling |
doaj-6f19dc75e3314f80a59fe47b268f06f8 (record updated 2021-03-30T03:29:37Z). IEEE Access, ISSN 2169-3536, vol. 8, pp. 219160-219173, 2020-01-01. DOI: 10.1109/ACCESS.2020.3042302; IEEE Xplore article 9279318. Byungin Yoo (https://orcid.org/0000-0002-4065-7512), School of Electrical Engineering, Korea Advanced Institute of Science and Technology, Daejeon, South Korea; Tristan Sylvain (https://orcid.org/0000-0001-5390-4036), Montreal Institute for Learning Algorithms, Montreal, QC, Canada; Yoshua Bengio, Montreal Institute for Learning Algorithms, Montreal, QC, Canada; Junmo Kim (https://orcid.org/0000-0002-7174-7932), School of Electrical Engineering, Korea Advanced Institute of Science and Technology, Daejeon, South Korea. Online access: https://ieeexplore.ieee.org/document/9279318/ |
collection |
DOAJ |
language |
English |
format |
Article |
sources |
DOAJ |
author |
Byungin Yoo Tristan Sylvain Yoshua Bengio Junmo Kim |
author_sort |
Byungin Yoo |
title |
Joint Learning of Generative Translator and Classifier for Visually Similar Classes |
publisher |
IEEE |
series |
IEEE Access |
issn |
2169-3536 |
publishDate |
2020-01-01 |
description |
In this paper, we propose a Generative Translation Classification Network (GTCN) for improving visual classification accuracy in settings where classes are visually similar and data is scarce. For this purpose, we propose joint learning from scratch to train a classifier and a generative stochastic translation network end-to-end. The translation network is used to perform on-line data augmentation across classes, whereas previous works have mostly involved domain adaptation. To help the model further benefit from this data augmentation, we introduce an adaptive fade-in loss and a quadruplet loss. We perform experiments on multiple datasets to demonstrate the proposed method's performance in varied settings. Of particular interest, training on 40% of the dataset is enough for our model to surpass the performance of baselines trained on the full dataset. When our architecture is trained on the full dataset, we achieve performance comparable with state-of-the-art methods despite using a light-weight architecture. |
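The quadruplet loss mentioned in the description extends the triplet loss with a second negative sample and a second margin. The sketch below uses the standard anchor/positive/two-negatives formulation as a stand-in; the paper's exact variant for pairing real and translated samples is not reproduced here, and the squared-Euclidean distance and margin values are illustrative choices:

```python
import numpy as np

def quadruplet_loss(anchor, positive, neg1, neg2, margin1=1.0, margin2=0.5):
    # Standard quadruplet loss: pull the positive toward the anchor while
    # pushing it away from neg1 (relative to the anchor) and away from the
    # neg1-neg2 pair, each with its own margin.
    d = lambda x, y: np.sum((x - y) ** 2)  # squared Euclidean distance
    term1 = max(0.0, d(anchor, positive) - d(anchor, neg1) + margin1)
    term2 = max(0.0, d(anchor, positive) - d(neg1, neg2) + margin2)
    return term1 + term2
```

When the positive already sits closer to the anchor than both negative comparisons by the required margins, both hinge terms vanish and the loss is zero; otherwise the violating term contributes linearly.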
topic |
Artificial neural networks; feature extraction; image classification; image generation; pattern analysis; semisupervised learning |
url |
https://ieeexplore.ieee.org/document/9279318/ |