Optimal planning with approximate model-based reinforcement learning
Model-based reinforcement learning methods make efficient use of samples by building a model of the environment and planning with it. Compared to model-free methods, they usually require fewer samples to converge to the optimal policy. Despite that efficiency, model-based methods may not learn the optimal...
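As a rough illustration of the idea the abstract describes (and not the thesis's own algorithm), the sketch below learns an empirical transition and reward model of a small toy MDP from random-policy samples and then plans on that learned model with value iteration; the chain environment, its sizes, and all constants are assumptions made for the example.

```python
# Generic tabular model-based RL sketch: estimate P(s'|s,a) and R(s,a)
# from experience, then plan on the learned model with value iteration.
import numpy as np

n_states, n_actions, gamma = 5, 2, 0.95
rng = np.random.default_rng(0)

def step(s, a):
    """Toy chain MDP (assumed for illustration): action 1 moves right,
    giving reward 1 in the terminal-like last state; action 0 resets to 0."""
    if a == 1:
        s2 = min(s + 1, n_states - 1)
        return s2, float(s2 == n_states - 1)
    return 0, 0.0

# 1. Collect experience with a random behaviour policy.
counts = np.zeros((n_states, n_actions, n_states))
reward_sum = np.zeros((n_states, n_actions))
s = 0
for _ in range(5000):
    a = rng.integers(n_actions)
    s2, r = step(s, a)
    counts[s, a, s2] += 1
    reward_sum[s, a] += r
    s = s2

# 2. Fit the approximate model: empirical P(s'|s,a) and R(s,a).
visits = counts.sum(axis=2, keepdims=True)
P = counts / np.maximum(visits, 1)
R = reward_sum / np.maximum(visits[:, :, 0], 1)

# 3. Plan in the learned model with value iteration.
V = np.zeros(n_states)
for _ in range(200):
    Q = R + gamma * P @ V  # Q[s,a] = R(s,a) + gamma * sum_s' P(s'|s,a) * V(s')
    V = Q.max(axis=1)
policy = Q.argmax(axis=1)
print("greedy policy:", policy)  # for this toy chain: move right in every state
```

Because planning happens entirely inside the learned model, sample efficiency comes from reusing every collected transition, which is the contrast with model-free methods drawn in the abstract.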
| Main Author: | Kao, Hai Feng |
|---|---|
| Language: | English |
| Published: | University of British Columbia, 2012 |
| Online Access: | http://hdl.handle.net/2429/39889 |
Similar Items
- Optimal planning with approximate model-based reinforcement learning
  by: Kao, Hai Feng
  Published: (2012)
- Optimal planning with approximate model-based reinforcement learning
  by: Kao, Hai Feng
  Published: (2012)
- Cooperative Multi-Agent Reinforcement Learning With Approximate Model Learning
  by: Young Joon Park, et al.
  Published: (2020-01-01)
- Multi-agent reinforcement learning with approximate model learning for competitive games
  by: Young Joon Park, et al.
  Published: (2019-01-01)
- Nonparametric Inverse Reinforcement Learning and Approximate Optimal Control with Temporal Logic Tasks
  by: Perundurai Rajasekaran, Siddharthan
  Published: (2017)