Label Distribution Learning by Regularized Sample Self-Representation
Multilabel learning, which focuses on whether each label is related or unrelated to a given instance, can solve many ambiguity problems. Label distribution learning (LDL) reflects the importance of each related label to an instance and offers a more general learning framework than multilabel learning. However, current LDL algorithms ignore the linear relationship between the label distribution and the features. In this paper, we propose a regularized sample self-representation (RSSR) approach for LDL. First, the label distribution problem is formalized by sample self-representation, whereby each label distribution is represented as a linear combination of the relevant features. Second, the LDL problem is solved by L2-norm and L2,1-norm least-squares methods to reduce the effects of outliers and overfitting; the corresponding algorithms are named RSSR-LDL2 and RSSR-LDL21. Third, the proposed algorithms are compared with four state-of-the-art LDL algorithms on 12 public datasets under five evaluation metrics. The results demonstrate that the proposed algorithms effectively identify the predictive label distribution and perform well in terms of distance and similarity evaluations.
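The L2-regularized variant described in the abstract admits a standard ridge-regression closed form. The sketch below is a hypothetical illustration of that idea, not the authors' implementation: function names, the softmax normalization step, and the regularization parameter `lam` are all assumptions for the sake of a runnable example.

```python
import numpy as np

def rssr_ldl2(X, D, lam=0.1):
    """Fit W (d x c) so that X @ W approximates the label distributions D,
    with an L2 penalty on W (a ridge / regularized least-squares sketch).
    X: (n, d) feature matrix; D: (n, c) label distributions, rows sum to 1."""
    d = X.shape[1]
    # Closed-form solution of min_W ||X W - D||_F^2 + lam * ||W||_F^2
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ D)

def predict_distribution(X, W):
    """Map raw scores X @ W to valid label distributions via a row-wise
    softmax (an assumed normalization; the paper may normalize differently)."""
    S = X @ W
    E = np.exp(S - S.max(axis=1, keepdims=True))  # stabilized exponentials
    return E / E.sum(axis=1, keepdims=True)       # rows sum to 1
```

The L2,1-norm variant (RSSR-LDL21) replaces the Frobenius penalty with a row-sparse norm and is typically solved iteratively rather than in closed form, which this sketch does not attempt.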
Main Authors: | Wenyuan Yang, Chan Li, Hong Zhao |
---|---|
Format: | Article |
Language: | English |
Published: | Hindawi Limited, 2018-01-01 |
Series: | Mathematical Problems in Engineering |
Online Access: | http://dx.doi.org/10.1155/2018/1090565 |
id |
doaj-955f1616d3a240df9adc820178716e13 |
---|---|
record_format |
Article |
spelling |
Wenyuan Yang; Chan Li; Hong Zhao (Lab of Granular Computing, Minnan Normal University, Zhangzhou, Fujian 363000, China). Label Distribution Learning by Regularized Sample Self-Representation. Mathematical Problems in Engineering, Hindawi Limited, 2018-01-01. ISSN 1024-123X, 1563-5147. http://dx.doi.org/10.1155/2018/1090565 |
collection |
DOAJ |
language |
English |
format |
Article |
sources |
DOAJ |
author |
Wenyuan Yang; Chan Li; Hong Zhao |
title |
Label Distribution Learning by Regularized Sample Self-Representation |
publisher |
Hindawi Limited |
series |
Mathematical Problems in Engineering |
issn |
1024-123X; 1563-5147 |
publishDate |
2018-01-01 |
description |
Multilabel learning, which focuses on whether each label is related or unrelated to a given instance, can solve many ambiguity problems. Label distribution learning (LDL) reflects the importance of each related label to an instance and offers a more general learning framework than multilabel learning. However, current LDL algorithms ignore the linear relationship between the label distribution and the features. In this paper, we propose a regularized sample self-representation (RSSR) approach for LDL. First, the label distribution problem is formalized by sample self-representation, whereby each label distribution is represented as a linear combination of the relevant features. Second, the LDL problem is solved by L2-norm and L2,1-norm least-squares methods to reduce the effects of outliers and overfitting; the corresponding algorithms are named RSSR-LDL2 and RSSR-LDL21. Third, the proposed algorithms are compared with four state-of-the-art LDL algorithms on 12 public datasets under five evaluation metrics. The results demonstrate that the proposed algorithms effectively identify the predictive label distribution and perform well in terms of distance and similarity evaluations. |
url |
http://dx.doi.org/10.1155/2018/1090565 |