Multiobjective model-free learning for robot pathfinding with environmental disturbances

Bibliographic Details
Main Authors: Changyun Wei, Fusheng Ni
Format: Article
Language: English
Published: SAGE Publishing 2019-11-01
Series: International Journal of Advanced Robotic Systems
Online Access: https://doi.org/10.1177/1729881419885703
Description
Summary: This article addresses the robot pathfinding problem with environmental disturbances, where a solution must account for the risks inherent in an uncertain and stochastic environment. For example, the movements of an underwater robot can be seriously disturbed by ocean currents, so control actions applied to the robot may not lead exactly to the desired locations. Reinforcement learning is a formal methodology that has been extensively studied in many sequential decision-making domains with uncertainty, but most reinforcement learning algorithms consider only a single objective encoded by a scalar reward. However, the robot pathfinding problem with environmental disturbances naturally involves multiple conflicting objectives. Specifically, in this work, the robot has to minimise its moving distance so as to save energy, while also staying as far away from unsafe regions as possible. To this end, we first propose a multiobjective model-free learning framework, and then investigate an appropriate action selection strategy by improving a baseline strategy along two dimensions. To demonstrate the effectiveness of the proposed learning framework and evaluate the performance of three action selection strategies, we also carry out an empirical study in a simulated environment.
ISSN:1729-8814
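The summary above describes a multiobjective model-free learning framework with two conflicting objectives (energy and safety) under stochastic disturbances. The following is a minimal illustrative sketch of that general idea, not the article's actual algorithm: it uses tabular Q-learning with a vector-valued reward, a linear scalarisation for action selection, and random drift standing in for ocean currents. All names, the grid layout, and the parameter values are hypothetical.

```python
import random

GRID = 5                        # 5x5 grid world (hypothetical environment)
GOAL = (4, 4)
UNSAFE = {(2, 2), (2, 3)}       # regions the robot should keep away from
ACTIONS = [(0, 1), (0, -1), (1, 0), (-1, 0)]

def step(state, action, rng):
    """Apply an action; with small probability a 'current' replaces it with a random drift."""
    if rng.random() < 0.1:                        # environmental disturbance
        action = rng.choice(ACTIONS)
    x = min(max(state[0] + action[0], 0), GRID - 1)
    y = min(max(state[1] + action[1], 0), GRID - 1)
    nxt = (x, y)
    r_energy = -1.0                               # every move costs energy
    r_safety = -10.0 if nxt in UNSAFE else 0.0    # penalty for entering unsafe regions
    return nxt, (r_energy, r_safety)              # vector reward, one entry per objective

def scalarise(q_vec, w):
    """Linear scalarisation of a vector-valued Q estimate (one common baseline)."""
    return w[0] * q_vec[0] + w[1] * q_vec[1]

def train(episodes=3000, alpha=0.1, gamma=0.95, eps=0.2, w=(1.0, 1.0), seed=0):
    rng = random.Random(seed)
    states = [(x, y) for x in range(GRID) for y in range(GRID)]
    # One Q value per objective for every state-action pair
    Q = {(s, i): [0.0, 0.0] for s in states for i in range(len(ACTIONS))}
    for _ in range(episodes):
        s = (0, 0)
        for _ in range(100):
            if s == GOAL:
                break
            if rng.random() < eps:                # epsilon-greedy exploration
                a = rng.randrange(len(ACTIONS))
            else:
                a = max(range(len(ACTIONS)), key=lambda i: scalarise(Q[(s, i)], w))
            nxt, r = step(s, ACTIONS[a], rng)
            best = max(range(len(ACTIONS)), key=lambda i: scalarise(Q[(nxt, i)], w))
            for k in range(2):                    # per-objective TD update
                target = r[k] + gamma * Q[(nxt, best)][k]
                Q[(s, a)][k] += alpha * (target - Q[(s, a)][k])
            s = nxt
    return Q

def greedy_path(Q, w=(1.0, 1.0), max_steps=50):
    """Noise-free greedy rollout of the learned policy from the start cell."""
    s, path = (0, 0), [(0, 0)]
    for _ in range(max_steps):
        if s == GOAL:
            break
        a = max(range(len(ACTIONS)), key=lambda i: scalarise(Q[(s, i)], w))
        s = (min(max(s[0] + ACTIONS[a][0], 0), GRID - 1),
             min(max(s[1] + ACTIONS[a][1], 0), GRID - 1))
        path.append(s)
    return path
```

Because both rewards are non-positive, the goal (whose Q values stay at zero) is the only attractive state, so the greedy rollout trades off path length against the unsafe-region penalty through the weight vector `w`.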