On the improvement of reinforcement active learning with the involvement of cross entropy to address one-shot learning problem.

As a promising research direction in recent decades, active learning allows an oracle to assign labels to representative examples in order to improve the performance of learning systems. Existing works focus mainly on hand-crafted criteria for screening high-value examples to be labeled. Instead of manually designing strategies for querying the user for labels of desired examples, we use a reinforcement learning algorithm parameterized by a neural network to automatically explore query strategies for active learning in stream-based one-shot classification problems. By incorporating cross-entropy into the loss function of Q-learning, the framework learns an efficient policy for deciding when and where to predict or query an instance. Compared with an influential earlier work on two image classification tasks, our method shows better performance, faster convergence, relatively good stability, and fewer label requests.
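The abstract describes the approach only at a high level. As a rough illustration (not the authors' implementation), the sketch below combines a standard Q-learning temporal-difference loss with a cross-entropy term on the classification logits, in the spirit of the "cross-entropy in the loss function of Q-learning" idea, for a stream-based agent that either predicts a class or requests the label. All names, the reward handling, and the loss weighting here are assumptions.

```python
# Hypothetical sketch, not the paper's code: Q-learning loss augmented with a
# cross-entropy term, for an agent that either predicts a class or queries a label.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ActiveQNet(nn.Module):
    """Q-network over actions: 0..num_classes-1 = predict that class,
    num_classes = request the true label from the oracle."""
    def __init__(self, feat_dim, num_classes, hidden=128):
        super().__init__()
        self.num_classes = num_classes
        self.body = nn.Sequential(nn.Linear(feat_dim, hidden), nn.ReLU())
        self.q_head = nn.Linear(hidden, num_classes + 1)  # +1 for the "query" action

    def forward(self, x):
        return self.q_head(self.body(x))  # Q-values, shape (batch, num_classes + 1)

def combined_loss(q_net, target_net, batch, gamma=0.9, ce_weight=1.0):
    """Temporal-difference loss plus cross-entropy on the class logits.

    `batch` holds features x, next-step features x_next, taken actions a,
    rewards r, done flags, and labels y (assumed available for this sketch,
    e.g. once the oracle has revealed them). Weighting is illustrative only.
    """
    q = q_net(batch["x"])                                      # (B, C + 1)
    q_taken = q.gather(1, batch["a"].unsqueeze(1)).squeeze(1)  # Q(s, a)
    with torch.no_grad():                                      # frozen target net
        q_next = target_net(batch["x_next"]).max(dim=1).values
        td_target = batch["r"] + gamma * (1.0 - batch["done"]) * q_next
    td_loss = F.mse_loss(q_taken, td_target)

    # Cross-entropy over the "predict" actions only, steering classification.
    ce_loss = F.cross_entropy(q[:, : q_net.num_classes], batch["y"])
    return td_loss + ce_weight * ce_loss
```

In a setup like this, the reward would typically encode correct versus incorrect predictions and a cost for querying the oracle, which is a common convention in reinforcement active learning; the paper's exact reward scheme is not given in the abstract.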


Bibliographic Details
Main Authors: Honglan Huang, Jincai Huang, Yanghe Feng, Jiarui Zhang, Zhong Liu, Qi Wang, Li Chen
Format: Article
Language: English
Published: Public Library of Science (PLoS), 2019-01-01
Series: PLoS ONE
ISSN: 1932-6203
Online Access: https://doi.org/10.1371/journal.pone.0217408