Benchmarking for Bayesian Reinforcement Learning.
In the Bayesian Reinforcement Learning (BRL) setting, agents aim to maximise the rewards collected while interacting with their environment, making use of prior knowledge available beforehand. Many BRL algorithms have already been proposed, but the benchmarks used to compare them are only relevant for specific cases. This paper addresses that problem and provides a new BRL comparison methodology along with the corresponding open-source library. The methodology defines a comparison criterion that measures the performance of algorithms on large sets of Markov Decision Processes (MDPs) drawn from some probability distributions. To enable the comparison of non-anytime algorithms, our methodology also includes a detailed analysis of the computation time requirements of each algorithm. Our library is released with all source code and documentation: it includes three test problems, each with two different prior distributions, and seven state-of-the-art RL algorithms. Finally, our library is illustrated by comparing all the available algorithms, and the results are discussed.
Main Authors: | Michael Castronovo, Damien Ernst, Adrien Couëtoux, Raphael Fonteneau |
---|---|
Format: | Article |
Language: | English |
Published: | Public Library of Science (PLoS), 2016-01-01 |
Series: | PLoS ONE |
Online Access: | http://europepmc.org/articles/PMC4909278?pdf=render |
id |
doaj-e61f3e88794744b5bca9b4b6cc52a20c |
collection |
DOAJ |
language |
English |
format |
Article |
sources |
DOAJ |
author |
Michael Castronovo, Damien Ernst, Adrien Couëtoux, Raphael Fonteneau |
title |
Benchmarking for Bayesian Reinforcement Learning. |
publisher |
Public Library of Science (PLoS) |
series |
PLoS ONE |
issn |
1932-6203 |
publishDate |
2016-01-01 |
description |
In the Bayesian Reinforcement Learning (BRL) setting, agents aim to maximise the rewards collected while interacting with their environment, making use of prior knowledge available beforehand. Many BRL algorithms have already been proposed, but the benchmarks used to compare them are only relevant for specific cases. This paper addresses that problem and provides a new BRL comparison methodology along with the corresponding open-source library. The methodology defines a comparison criterion that measures the performance of algorithms on large sets of Markov Decision Processes (MDPs) drawn from some probability distributions. To enable the comparison of non-anytime algorithms, our methodology also includes a detailed analysis of the computation time requirements of each algorithm. Our library is released with all source code and documentation: it includes three test problems, each with two different prior distributions, and seven state-of-the-art RL algorithms. Finally, our library is illustrated by comparing all the available algorithms, and the results are discussed. |
url |
http://europepmc.org/articles/PMC4909278?pdf=render |
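
The description above states the comparison criterion (mean performance over many MDPs drawn from a prior distribution) and the computation-time analysis only in prose. Below is a minimal Python sketch of that idea; it is not the authors' released library, and sample_mdp, run_episode, benchmark, and random_policy are hypothetical names introduced purely for illustration.

```python
# Minimal sketch (not the paper's library): score an agent by its mean discounted
# return over many MDPs sampled from a prior, while also recording wall-clock time
# as a stand-in for the paper's computation-time analysis.
import time
import numpy as np

def sample_mdp(rng, n_states=5, n_actions=3):
    """Draw one MDP from a simple prior: Dirichlet transition rows, uniform rewards."""
    transitions = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))
    rewards = rng.uniform(0.0, 1.0, size=(n_states, n_actions))
    return transitions, rewards

def run_episode(rng, transitions, rewards, policy, horizon=100, gamma=0.95):
    """Simulate one trajectory and return its discounted sum of rewards."""
    n_states, n_actions, _ = transitions.shape
    state, ret, discount = 0, 0.0, 1.0
    for _ in range(horizon):
        action = policy(state, n_actions, rng)
        ret += discount * rewards[state, action]
        discount *= gamma
        state = rng.choice(n_states, p=transitions[state, action])
    return ret

def benchmark(policy, n_mdps=500, seed=0):
    """Comparison criterion: average discounted return over MDPs drawn from the prior."""
    rng = np.random.default_rng(seed)
    start = time.perf_counter()
    returns = []
    for _ in range(n_mdps):
        transitions, rewards = sample_mdp(rng)
        returns.append(run_episode(rng, transitions, rewards, policy))
    elapsed = time.perf_counter() - start
    return float(np.mean(returns)), elapsed

def random_policy(state, n_actions, rng):
    """Placeholder agent: picks actions uniformly at random."""
    return int(rng.integers(n_actions))

if __name__ == "__main__":
    score, elapsed = benchmark(random_policy)
    print(f"mean discounted return: {score:.3f} (computed in {elapsed:.2f} s)")
```

In the paper's setting, the random placeholder policy would be replaced by a BRL agent that updates a posterior from observed transitions, and the resulting score and timing would then be compared across agents, test problems, and prior distributions.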