Predicting optimal value functions by interpolating reward functions in scalarized multi-objective reinforcement learning

Bibliographic Details
Main Authors: Kusari, Arpan (Author), How, Jonathan P. (Author)
Format: Article
Language: English
Published: IEEE, 2021-10-28T15:57:58Z.
Subjects:
Online Access: Get fulltext
LEADER 02081 am a22001693u 4500
001 136715
042 |a dc 
100 1 0 |a Kusari, Arpan  |e author 
700 1 0 |a How, Jonathan P.  |e author 
245 0 0 |a Predicting optimal value functions by interpolating reward functions in scalarized multi-objective reinforcement learning 
260 |b IEEE,   |c 2021-10-28T15:57:58Z. 
856 |z Get fulltext  |u https://hdl.handle.net/1721.1/136715 
520 |a © 2020 IEEE. A common approach for defining a reward function for multi-objective reinforcement learning (MORL) problems is the weighted sum of the multiple objectives. The weights are then treated as design parameters dependent on the expertise (and preference) of the person performing the learning, with the typical result that a new solution is required for any change in these settings. This paper investigates the relationship between the reward function and the optimal value function for MORL; specifically addressing the question of how to approximate the optimal value function well beyond the set of weights for which the optimization problem was actually solved, thereby avoiding the need to recompute for any particular choice. We prove that the value function transforms smoothly given a transformation of weights of the reward function (and thus a smooth interpolation in the policy space). A Gaussian process is used to obtain a smooth interpolation over the reward function weights of the optimal value function for three well-known examples: Gridworld, Objectworld and Pendulum. The results show that the interpolation can provide robust values for sample states and actions in both discrete and continuous domain problems. Significant advantages arise from utilizing this interpolation technique in the domain of autonomous vehicles: easy, instant adaptation of user preferences while driving and true randomization of obstacle vehicle behavior preferences during training. 
546 |a en 
655 7 |a Article 
773 |t 10.1109/icra40945.2020.9197456 
773 |t Proceedings - IEEE International Conference on Robotics and Automation
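As an informal illustration of the approach summarized in the abstract (field 520), the sketch below fits a Gaussian process over reward-function weights and queries the optimal value at a weight setting that was never solved directly. It is a minimal, hypothetical example, not the authors' implementation: solve_mdp, the two-objective weight grid, and the synthetic value surface it returns are stand-ins for an actual MORL solver and domain such as Gridworld or Pendulum.

    # Minimal sketch (assumed, not from the paper): interpolate V*(s0; w) over
    # scalarization weights w with a Gaussian process.
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF

    def solve_mdp(w):
        # Placeholder for any RL solver (e.g. value iteration) run on the
        # scalarized reward r = w[0]*r1 + w[1]*r2; returns V* of a fixed state.
        return 2.0 * w[0] - 0.5 * w[1] + 0.3 * w[0] * w[1]

    # Solve the scalarized problem only at a coarse grid of weight vectors.
    train_w = np.array([[a, 1.0 - a] for a in np.linspace(0.0, 1.0, 6)])
    train_v = np.array([solve_mdp(w) for w in train_w])

    # Fit a GP over weight space, then predict at a weight never solved for.
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5), normalize_y=True)
    gp.fit(train_w, train_v)
    mean, std = gp.predict(np.array([[0.37, 0.63]]), return_std=True)
    print(f"V*(s0) at w=(0.37, 0.63): {mean[0]:.3f} +/- {std[0]:.3f}")

The paper interpolates values for sample states and actions of Gridworld, Objectworld and Pendulum; the single reference state here only keeps the sketch short.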