GO Loss: A Gaussian Distribution-Based Orthogonal Decomposition Loss for Classification

Main Authors: Mengxin Liu, Wenyuan Tao, Xiao Zhang, Yi Chen, Jie Li, Chung-Ming Own
Format: Article
Language: English
Published: Hindawi-Wiley, 2019-01-01
Series: Complexity
Online Access: http://dx.doi.org/10.1155/2019/9206053
Authors and Affiliations:
- Mengxin Liu: College of Intelligence and Computing, Tianjin University, Tianjin 300072, China
- Wenyuan Tao: College of Intelligence and Computing, Tianjin University, Tianjin 300072, China
- Xiao Zhang: Department of Electronic Engineering, The Chinese University of Hong Kong, Hong Kong SAR 999077, China
- Yi Chen: Beijing Key Laboratory of Big Data Technology for Food Safety, Beijing Technology and Business University, Beijing 100048, China
- Jie Li: College of Intelligence and Computing, Tianjin University, Tianjin 300072, China
- Chung-Ming Own: College of Intelligence and Computing, Tianjin University, Tianjin 300072, China

ISSN: 1076-2787, 1099-0526
Abstract:
We present a novel loss function, GO loss, for classification. Most existing methods, such as center loss and contrastive loss, dynamically determine the convergence direction of the sample features during training. By contrast, GO loss decomposes the convergence direction into two mutually orthogonal components, the tangential and radial directions, and optimizes them separately. The two components theoretically govern the interclass separation and the intraclass compactness of the feature distribution, respectively. Minimizing the loss on each component separately therefore avoids mutual interference between the two optimizations, and a stable convergence center can be obtained for each. Moreover, we assume that the two components follow a Gaussian distribution, which proves to be an effective way to model the training features accurately and thereby improve classification. Experiments on multiple classification benchmarks, including MNIST, CIFAR, and ImageNet, demonstrate the effectiveness of GO loss.
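The core idea in the abstract, splitting a feature's convergence direction into orthogonal radial and tangential components relative to its class center, can be sketched in a few lines of NumPy. This is an illustrative sketch only: the function name, the use of the class-center direction as the radial axis, and the example vectors are assumptions for exposition, not the authors' implementation.

```python
import numpy as np

def orthogonal_decompose(feature, center):
    """Split `feature` into a radial component (parallel to the class-center
    direction) and a tangential component (orthogonal to it).

    Illustrative sketch of the decomposition described in the abstract;
    names and formulation are assumptions, not the paper's code.
    """
    unit = center / np.linalg.norm(center)   # unit vector toward the class center
    radial = np.dot(feature, unit) * unit    # projection onto the center direction
    tangential = feature - radial            # orthogonal remainder
    return radial, tangential

# Example: the two components are orthogonal and sum back to the feature.
f = np.array([3.0, 4.0])
c = np.array([1.0, 0.0])
r, t = orthogonal_decompose(f, c)
# r is [3., 0.] (radial) and t is [0., 4.] (tangential)
```

Under this decomposition, a loss on the radial component constrains how far features lie from the class center (intraclass compactness), while a loss on the tangential component constrains their angular spread (interclass separation), which is why the two can be minimized independently.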