Latent Structure Matching for Knowledge Transfer in Reinforcement Learning
Reinforcement learning algorithms usually require a large number of empirical samples and converge slowly in practical applications. One solution is to introduce transfer learning: knowledge from well-learned source tasks can be reused to reduce the sample requirement and accelerate the lear...
| Main Authors: | Yi Zhou, Fenglei Yang |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | MDPI AG, 2020-02-01 |
| Series: | Future Internet |
| Online Access: | https://www.mdpi.com/1999-5903/12/2/36 |
Similar Items
- Reusing Source Task Knowledge via Transfer Approximator in Reinforcement Transfer Learning
  by: Qiao Cheng, et al.
  Published: (2018-12-01)
- A Low-Sampling-Rate Trajectory Matching Algorithm in Combination of History Trajectory and Reinforcement Learning
  by: SUN Wenbin, et al.
  Published: (2016-11-01)
- Offline Multi-Policy Gradient for Latent Mixture Environments
  by: Xiaoguang Li, et al.
  Published: (2021-01-01)
- Bayesian methods for knowledge transfer and policy search in reinforcement learning
  by: Wilson, Aaron (Aaron Creighton)
  Published: (2012)
- MPR-RL: Multi-Prior Regularized Reinforcement Learning for Knowledge Transfer
  by: Stork, J.A., et al.
  Published: (2022)