Deep Q-Network with Predictive State Models in Partially Observable Domains
While deep reinforcement learning (DRL) has achieved great success in some large domains, most related algorithms assume that the state of the underlying system is fully observable. However, many real-world problems are only partially observable. For systems with continuous observation, m...
| Main Authors: | Danning Yu, Kun Ni, Yunlong Liu |
| --- | --- |
| Format: | Article |
| Language: | English |
| Published: | Hindawi Limited, 2020-01-01 |
| Series: | Mathematical Problems in Engineering |
| Online Access: | http://dx.doi.org/10.1155/2020/1596385 |
Similar Items
- Dynamic Spectrum Access with Deep Q-learning in Densely Occupied and Partially Observable Environments
  by: S. Tomovic, et al.
  Published: (2021-07-01)
- Kalman Based Finite State Controller for Partially Observable Domains
  by: Alp Sardag, et al.
  Published: (2006-12-01)
- Kalman Based Finite State Controller for Partially Observable Domains
  by: H. Levent Akin, et al.
  Published: (2008-11-01)
- Regularized Taylor Echo State Networks for Predictive Control of Partially Observed Systems
  by: Kui Xiang, et al.
  Published: (2016-01-01)
- Sim-to-Real Quadrotor Landing via Sequential Deep Q-Networks and Domain Randomization
  by: Riccardo Polvara, et al.
  Published: (2020-02-01)