Indoor robot path planning assisted by wireless network
Abstract: In indoor robot global path planning, the robot must move from a starting point to a target point specified by a position command transmitted over the wireless network. Existing global path planning methods based on behavior dynamics or rolling windows have practical limitations: the resulting path may not be optimal, pseudo attractors or blind search can arise in environments with large state spaces, offline learning cannot adapt to real-time environmental changes, and the probability of selecting each robot action must be set by hand. To address these problems, we propose a behavior dynamics and rolling window path planning approach based on online reinforcement learning. Q learning optimizes the parameters of the behavior dynamics model to improve planning performance, while behavior dynamics guides the Q-learning process and improves learning efficiency; in each round of reinforcement learning, the action-selection knowledge is gradually corrected as the Q table is updated, so the learning process is optimized. Simulation results show that the method yields a marked improvement in path planning, and in a physical experiment the robot obtains the target location over the wireless network and plans an optimized, smooth global path online.
Main Authors: | Xiaohua Wang, Tengteng Nie, Daixian Zhu |
---|---|
Affiliations: | College of Electrics and Information, Xi’an Polytechnic University (Wang, Nie); College of Communication and Information Engineer, Xi’an University of Science and Technology (Zhu) |
Format: | Article |
Language: | English |
Published: | SpringerOpen, 2019-05-01 |
Series: | EURASIP Journal on Wireless Communications and Networking |
ISSN: | 1687-1499 |
DOI: | 10.1186/s13638-019-1437-x |
Subjects: | Behavior dynamics; Rolling windows; Q learning; Path planning |
Source: | DOAJ (record doaj-ff89e65db9f64067b1387ab886f7571f) |
Online Access: | http://link.springer.com/article/10.1186/s13638-019-1437-x |
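The abstract describes Q learning whose action selection is guided by a behavior dynamics model. As a rough illustration of that idea only, and not the authors' implementation, the Python sketch below runs tabular Q learning on a hypothetical grid world and biases greedy action selection toward the goal direction as a crude stand-in for a behavior-dynamics attractor. The grid size, goal position, rewards, learning rate, exploration rate, and bias weight are all illustrative assumptions.

```python
# Minimal sketch (not the paper's code): tabular Q-learning on a grid world
# where action selection is nudged toward the goal direction, loosely standing
# in for "behavior dynamics guides the learning process of Q learning".
import random

ROWS, COLS = 10, 10                 # assumed grid size
GOAL = (9, 9)                       # assumed target cell
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # up, down, left, right
ALPHA, GAMMA, EPSILON, BIAS = 0.1, 0.95, 0.2, 0.5  # illustrative parameters

# Q table: one value per (state, action) pair.
Q = {((r, c), a): 0.0 for r in range(ROWS) for c in range(COLS) for a in range(4)}

def step(state, a):
    """Apply an action, clipping at the grid border; reward reaching the goal."""
    dr, dc = ACTIONS[a]
    r = min(max(state[0] + dr, 0), ROWS - 1)
    c = min(max(state[1] + dc, 0), COLS - 1)
    nxt = (r, c)
    return nxt, (10.0 if nxt == GOAL else -0.1), nxt == GOAL

def goal_bias(state, a):
    """Crude attractor term: favor actions that reduce distance to the goal (walls ignored)."""
    dr, dc = ACTIONS[a]
    before = abs(GOAL[0] - state[0]) + abs(GOAL[1] - state[1])
    after = abs(GOAL[0] - state[0] - dr) + abs(GOAL[1] - state[1] - dc)
    return BIAS if after < before else 0.0

def choose(state):
    """Epsilon-greedy over Q values plus the goal-direction bias."""
    if random.random() < EPSILON:
        return random.randrange(4)
    return max(range(4), key=lambda a: Q[(state, a)] + goal_bias(state, a))

for episode in range(500):
    s, done = (0, 0), False
    while not done:
        a = choose(s)
        s2, reward, done = step(s, a)
        best_next = max(Q[(s2, b)] for b in range(4))
        # Standard Q-learning update; the bias only affects action selection,
        # so the learned values are gradually corrected as the Q table updates.
        Q[(s, a)] += ALPHA * (reward + GAMMA * best_next - Q[(s, a)])
        s = s2
```

After training, a path can be read off by starting at the origin and repeatedly taking the greedy action; the goal-direction bias mainly speeds up early episodes and matters less once the Q table has converged.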