Computational Benefits of Intermediate Rewards for Goal-Reaching Policy Learning
Empirical work on many goal-reaching reinforcement learning (RL) tasks has verified that rewarding the agent for completing subgoals improves convergence speed and practical performance. We attempt to provide a theoretical framework quantifying the computational benefits of rewarding the completion of subgoals, in t...
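The idea summarized in the abstract can be illustrated with a minimal sketch: tabular Q-learning on a 1-D chain where the goal reward is sparse, plus an optional intermediate bonus the first time a midpoint "subgoal" is crossed in an episode. Everything here (the chain environment, the `0.1` bonus, the hyperparameters) is a hypothetical toy construction for illustration, not the paper's actual setting.

```python
import random

def run_q_learning(subgoal_bonus, n_states=10, episodes=200, seed=0):
    """Tabular Q-learning on a 1-D chain: start at state 0, goal at n_states-1.
    Actions: 0 = left, 1 = right. Sparse reward of 1.0 at the goal; if
    subgoal_bonus is True, add a small one-time bonus per episode the first
    time the midpoint subgoal is reached (a simple intermediate reward)."""
    rng = random.Random(seed)
    q = [[0.0, 0.0] for _ in range(n_states)]   # Q-table: q[state][action]
    alpha, gamma, eps = 0.5, 0.95, 0.1          # illustrative hyperparameters
    midpoint = n_states // 2
    steps_per_episode = []
    for _ in range(episodes):
        s, steps, hit_mid = 0, 0, False
        while s != n_states - 1 and steps < 500:
            # epsilon-greedy action selection (ties broken toward "right")
            if rng.random() < eps:
                a = rng.randrange(2)
            else:
                a = 0 if q[s][0] > q[s][1] else 1
            s2 = max(0, s - 1) if a == 0 else min(n_states - 1, s + 1)
            r = 1.0 if s2 == n_states - 1 else 0.0
            if subgoal_bonus and s2 >= midpoint and not hit_mid:
                r += 0.1        # hypothetical intermediate subgoal reward
                hit_mid = True
            # standard Q-learning update
            q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
            s, steps = s2, steps + 1
        steps_per_episode.append(steps)
    return steps_per_episode

# Episode lengths shrink toward the optimal 9 steps as learning converges;
# comparing subgoal_bonus=True vs. False gives a rough empirical feel for
# the convergence-speed effect the paper analyzes theoretically.
lengths = run_q_learning(subgoal_bonus=True)
```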
Main Authors: Baek, C.; Jiao, J.; Ma, Y.; Zhai, Y.; Zhou, Z.
Format: Article
Language: English
Published: AI Access Foundation, 2022
Similar Items
- Combining Subgoal Graphs with Reinforcement Learning to Build a Rational Pathfinder
  by: Junjie Zeng, et al.
  Published: (2019-01-01)
- Improved automatic discovery of subgoals for options in hierarchical
  by: R. Matthew Kretchmar, et al.
  Published: (2003-10-01)
- Speed Optimization for Incremental Updating of Grid-based Distance Maps
  by: Long Qin, et al.
  Published: (2019-05-01)
- Subgoal-Based Reward Shaping to Improve Efficiency in Reinforcement Learning
  by: Takato Okudo, et al.
  Published: (2021-01-01)
- Multipath TCP-Based IoT Communication Evaluation: From the Perspective of Multipath Management with Machine Learning
  by: Ruiwen Ji, et al.
  Published: (2020-11-01)