Multiobjective model-free learning for robot pathfinding with environmental disturbances

This article addresses the robot pathfinding problem with environmental disturbances, where a solution must account for the risks inherent in an uncertain and stochastic environment. For example, the movements of an underwater robot can be seriously disturbed by ocean currents, so the control actions applied to the robot do not lead exactly to the desired locations. Reinforcement learning is a formal methodology that has been extensively studied in many sequential decision-making domains with uncertainty, but most reinforcement learning algorithms consider only a single objective encoded by a scalar reward. The robot pathfinding problem with environmental disturbances, however, naturally involves multiple conflicting objectives: in this work, the robot has to minimise its moving distance so as to save energy, and at the same time it has to stay as far away from unsafe regions as possible. To this end, we first propose a multiobjective model-free learning framework and then investigate an appropriate action selection strategy by improving a baseline along two dimensions. To demonstrate the effectiveness of the proposed learning framework and evaluate the performance of three action selection strategies, we also carry out an empirical study in a simulated environment.
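The abstract describes the approach only at a high level, so the sketch below illustrates what a multiobjective model-free learner of this kind can look like: tabular Q-learning with a vector-valued reward (step cost versus safety penalty) in a grid world whose transitions are perturbed by random drift. Every concrete detail here (the grid, the drift model, the reward values, the linear-scalarisation weights, and the epsilon-greedy selection) is an illustrative assumption, not the authors' framework nor the three action selection strategies evaluated in the article.

```python
# Minimal sketch (not the authors' algorithm) of multiobjective model-free learning
# for pathfinding under stochastic drift. Grid size, drift probability, reward values,
# scalarisation weights and epsilon-greedy selection are all illustrative assumptions.
import random

import numpy as np

GRID = 10                                       # assumed 10x10 grid world
ACTIONS = [(1, 0), (-1, 0), (0, 1), (0, -1)]    # down, up, right, left
GOAL = (GRID - 1, GRID - 1)
UNSAFE = {(4, 4), (4, 5), (5, 4)}               # assumed unsafe region
DRIFT_P = 0.2                                   # chance a "current" overrides the action
ALPHA, GAMMA, EPS = 0.1, 0.95, 0.1
WEIGHTS = np.array([1.0, 2.0])                  # weights: [distance objective, safety objective]

# One Q-vector per (state, action): Q[s, a] = [q_distance, q_safety]
Q = {((x, y), a): np.zeros(2)
     for x in range(GRID) for y in range(GRID) for a in range(len(ACTIONS))}

def step(state, action):
    """Apply an action; with probability DRIFT_P the disturbance picks the move instead."""
    dx, dy = ACTIONS[action]
    if random.random() < DRIFT_P:
        dx, dy = random.choice(ACTIONS)
    nxt = (min(max(state[0] + dx, 0), GRID - 1),
           min(max(state[1] + dy, 0), GRID - 1))
    # Vector reward: [step cost (distance/energy), penalty for entering an unsafe cell]
    reward = np.array([-1.0, -10.0 if nxt in UNSAFE else 0.0])
    return nxt, reward, nxt == GOAL

def select_action(state):
    """Epsilon-greedy over linearly scalarised Q-vectors (one possible selection strategy)."""
    if random.random() < EPS:
        return random.randrange(len(ACTIONS))
    return max(range(len(ACTIONS)), key=lambda a: WEIGHTS @ Q[state, a])

for episode in range(500):
    state, done = (0, 0), False
    for _ in range(500):                        # cap episode length for this sketch
        a = select_action(state)
        nxt, r, done = step(state, a)
        # Vector-valued Q-learning update: the greedy successor action is chosen
        # on the scalarised value, then every objective is updated in parallel.
        best = max(range(len(ACTIONS)), key=lambda b: WEIGHTS @ Q[nxt, b])
        target = r + (0.0 if done else GAMMA) * Q[nxt, best]
        Q[state, a] += ALPHA * (target - Q[state, a])
        state = nxt
        if done:
            break

print("Scalarised value of start state:",
      max(WEIGHTS @ Q[(0, 0), a] for a in range(len(ACTIONS))))
```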

Bibliographic Details
Main Authors: Changyun Wei, Fusheng Ni
Format: Article
Language: English
Published: SAGE Publishing, 2019-11-01
Series: International Journal of Advanced Robotic Systems
ISSN: 1729-8814
Source: DOAJ
Online Access: https://doi.org/10.1177/1729881419885703