A Deep Reinforcement Learning Approach for the Patrolling Problem of Water Resources Through Autonomous Surface Vehicles: The Ypacarai Lake Case
Autonomous Surface Vehicles (ASVs) are highly useful for the continuous monitoring and exploration of water resources thanks to their autonomy, mobility, and relatively low cost. In the path planning context, the patrolling problem is usually addressed with heuristic approaches, such as Genetic Algorithms (GA) or Reinforcement Learning (RL), because of the complexity and high dimensionality of the problem. In this paper, the patrolling problem of Ypacarai Lake (Asunción, Paraguay) has been formulated as a Markov Decision Process (MDP) for two possible cases: the homogeneous and the non-homogeneous scenarios. A tailored reward function has been designed for the non-homogeneous case. Two Deep Reinforcement Learning algorithms, Deep Q-Learning (DQL) and Double Deep Q-Learning (DDQL), have been evaluated to solve the patrolling problem. Furthermore, due to the high number of parameters and hyperparameters involved in the algorithms, a thorough search has been conducted to find the best values for training the neural networks and for the proposed reward function. According to the results, a suitable configuration of the parameters yields good coverage, with more than 93% of the lake surface covered on average. In addition, the proposed approach achieves higher sampling redundancy in important zones than other commonly used algorithms for non-homogeneous coverage path planning, such as Policy Gradient, the lawnmower algorithm, or random exploration, achieving a 64% improvement in the mean time between visits.
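As a quick orientation for the DDQL variant mentioned in the abstract, the sketch below shows the Double DQN target computation that distinguishes it from plain Deep Q-Learning: the online network selects the greedy next action while a slower-moving target network evaluates it, which reduces the overestimation bias of standard Q-Learning. This is a generic PyTorch illustration under assumed placeholder names and sizes (QNetwork, ddqn_targets, a 64-unit hidden layer), not the authors' implementation or their Ypacarai-specific state encoding and reward.

```python
# Illustrative Double DQN target computation (assumed sketch, not the paper's code).
import torch
import torch.nn as nn


class QNetwork(nn.Module):
    """Placeholder Q-network; the paper's state encoding and architecture differ."""

    def __init__(self, n_inputs: int, n_actions: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_inputs, 64), nn.ReLU(),
            nn.Linear(64, n_actions),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)


def ddqn_targets(online: QNetwork, target: QNetwork,
                 rewards: torch.Tensor, next_states: torch.Tensor,
                 dones: torch.Tensor, gamma: float = 0.99) -> torch.Tensor:
    with torch.no_grad():
        # Online network chooses the greedy next action...
        next_actions = online(next_states).argmax(dim=1, keepdim=True)
        # ...and the target network evaluates that action.
        next_q = target(next_states).gather(1, next_actions).squeeze(1)
        # Bootstrapped target; terminal transitions carry no future value.
        return rewards + gamma * (1.0 - dones) * next_q
```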
Main Authors: Samuel Yanes Luis, Daniel Gutierrez Reina, Sergio L. Toral Marin
Format: Article
Language: English
Published: IEEE, 2020-01-01
Series: IEEE Access
Subjects: Deep reinforcement learning; monitoring; path planning; autonomous surface vehicle; patrolling; complete coverage
Online Access: https://ieeexplore.ieee.org/document/9252944/
id: doaj-84fba415a0c9430d91ac2cec01e24437
DOI: 10.1109/ACCESS.2020.3036938
ISSN: 2169-3536
Published in: IEEE Access, vol. 8, pp. 204076-204093, 2020
IEEE article number: 9252944
Authors and affiliations:
Samuel Yanes Luis (https://orcid.org/0000-0002-7796-3599), Department of Electronic Engineering, Technical School of Engineering, University of Seville, Sevilla, Spain
Daniel Gutierrez Reina, Department of Electronic Engineering, Technical School of Engineering, University of Seville, Sevilla, Spain
Sergio L. Toral Marin, Department of Electronic Engineering, Technical School of Engineering, University of Seville, Sevilla, Spain
Collection: DOAJ