Reinforcement learning for solution updating in Artificial Bee Colony.

In the Artificial Bee Colony (ABC) algorithm, the employed bee and onlooker bee phases update candidate solutions by changing the value in a single dimension, a procedure dubbed the one-dimension update process. For problems in which the number of dimensions is very high, the one-dimension update process can cause solution quality and convergence speed to drop. This paper proposes a new algorithm, R-ABC, which uses reinforcement learning for solution updating in the ABC algorithm. After an employed bee updates a solution, the outcome yields positive or negative reinforcement that is applied to the solution's dimensions in the onlooker bee phase. Positive reinforcement is given when the candidate solution from the employed bee phase achieves a better fitness value; the more often changing a dimension yields a better fitness value, the higher that dimension's update value becomes in the onlooker bee phase. Conversely, negative reinforcement is given when the candidate solution does not achieve a better fitness value. The performance of the proposed algorithm is assessed on eight basic numerical benchmark functions in four categories with 100, 500, 700, and 900 dimensions, on seven CEC2005 shifted functions with 100, 500, 700, and 900 dimensions, and on six CEC2014 hybrid functions with 100 dimensions. The results show that R-ABC provides solutions significantly better than all compared algorithms for all tested dimensions on the basic benchmark functions. On the CEC2005 shifted functions, the number of solutions for which R-ABC is significantly better than the other algorithms increases as the number of dimensions increases. On the CEC2014 hybrid functions, R-ABC is at least comparable to state-of-the-art ABC variants.
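For readers who want the mechanism made concrete, below is a minimal Python sketch of the idea described in the abstract: the employed bee phase picks the update dimension uniformly at random, while the onlooker bee phase picks it via per-dimension reinforcement weights that grow on improvement and shrink otherwise. The multiplicative reward/penalty rule, the roulette-wheel selection, and all names (r_abc_sketch, pick_dimension, the Sphere objective) are illustrative assumptions, not the paper's exact formulation; the sketch also omits canonical ABC details such as the scout bee phase and fitness-proportional selection of solutions by onlookers.

```python
import random


def sphere(x):
    """Benchmark objective to minimize (Sphere function)."""
    return sum(v * v for v in x)


def pick_dimension(weights):
    """Roulette-wheel selection over per-dimension reinforcement weights:
    dimensions whose updates improved fitness more often are chosen more often."""
    r = random.uniform(0.0, sum(weights))
    acc = 0.0
    for j, w in enumerate(weights):
        acc += w
        if r <= acc:
            return j
    return len(weights) - 1


def one_dim_update(x, population, j, lo, hi):
    """Standard ABC one-dimension update: perturb x in dimension j
    relative to a randomly chosen neighbour solution, then clamp to bounds."""
    partner = random.choice(population)
    phi = random.uniform(-1.0, 1.0)
    cand = list(x)
    cand[j] = min(max(x[j] + phi * (x[j] - partner[j]), lo), hi)
    return cand


def r_abc_sketch(dim=10, pop_size=20, iters=500, lo=-5.0, hi=5.0):
    population = [[random.uniform(lo, hi) for _ in range(dim)]
                  for _ in range(pop_size)]
    fitness = [sphere(x) for x in population]
    weights = [1.0] * dim  # one reinforcement weight per dimension

    for _ in range(iters):
        # Employed bee phase: dimension chosen uniformly at random.
        for i in range(pop_size):
            j = random.randrange(dim)
            cand = one_dim_update(population[i], population, j, lo, hi)
            f_cand = sphere(cand)
            if f_cand < fitness[i]:   # improvement -> positive reinforcement
                population[i], fitness[i] = cand, f_cand
                weights[j] *= 1.2
            else:                     # no improvement -> negative reinforcement
                weights[j] = max(weights[j] * 0.9, 1e-6)

        # Onlooker bee phase: dimension chosen by reinforcement weight.
        for i in range(pop_size):
            j = pick_dimension(weights)
            cand = one_dim_update(population[i], population, j, lo, hi)
            f_cand = sphere(cand)
            if f_cand < fitness[i]:
                population[i], fitness[i] = cand, f_cand

    return min(fitness)


if __name__ == "__main__":
    print("best fitness found:", r_abc_sketch())
```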


Bibliographic Details
Main Authors: Suthida Fairee, Santitham Prom-On, Booncharoen Sirinaovakul
Format: Article
Language: English
Published: Public Library of Science (PLoS), 2018-01-01
Series: PLoS ONE 13(7): e0200738
ISSN: 1932-6203
DOI: 10.1371/journal.pone.0200738
Online Access: http://europepmc.org/articles/PMC6049945?pdf=render