Object Semantic Grid Mapping with 2D LiDAR and RGB-D Camera for Domestic Robot Navigation
Main Authors: Xianyu Qi; Wei Wang; Ziwei Liao; Xiaoyu Zhang; Dongsheng Yang; Ran Wei
Affiliations: Robotics Institute, Beihang University, Beijing 100191, China (X. Qi, W. Wang, Z. Liao, X. Zhang, D. Yang); Beijing Evolver Robotics Technology Company Limited, Beijing 100192, China (R. Wei)
Format: Article
Language: English
Published: MDPI AG, 2020-08-01
Series: Applied Sciences, Vol. 10, Issue 17, Article 5782
ISSN: 2076-3417
DOI: 10.3390/app10175782
Subjects: object semantic grid map; 2D LiDAR; RGB-D camera; domestic navigation
Online Access: https://www.mdpi.com/2076-3417/10/17/5782
Abstract:
Occupancy grid maps are sufficient for mobile robots to complete metric navigation tasks in domestic environments, but they lack the semantic information needed to give robots socially aware goal selection and human-friendly operation modes. In this paper, we propose an object semantic grid mapping system based on a 2D Light Detection and Ranging (LiDAR) sensor and an RGB-D camera to address this problem. First, we use laser-based Simultaneous Localization and Mapping (SLAM) to generate an occupancy grid map and a robot trajectory. Then, we apply object detection to the color images to obtain object semantics, and we use joint interpolation to refine the camera poses. From the detections, the depth images, and the interpolated poses, we build a point cloud segmented into object instances. To generate object-oriented minimum bounding rectangles, we propose a method for extracting the dominant directions of the room. Furthermore, we build object goal spaces that help robots select navigation goals conveniently and socially. We verified the system on the Robot@Home dataset, and the results show that it is effective.
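The abstract does not spell out the joint-interpolation step. A common way to realize it is to interpolate the laser SLAM trajectory at each camera timestamp: linearly in translation and along the shortest arc in heading. The following minimal sketch assumes timestamped 2D poses; the function name and signature are illustrative, not taken from the paper.

```python
import math

def interpolate_pose(t, pose_a, pose_b):
    """Interpolate a 2D pose at time t between two timestamped SLAM poses,
    each given as (timestamp, x, y, theta)."""
    t_a, xa, ya, tha = pose_a
    t_b, xb, yb, thb = pose_b
    alpha = (t - t_a) / (t_b - t_a)                  # blend weight in [0, 1]
    x = (1 - alpha) * xa + alpha * xb                # linear in translation
    y = (1 - alpha) * ya + alpha * yb
    dth = math.atan2(math.sin(thb - tha), math.cos(thb - tha))  # shortest arc
    theta = math.atan2(math.sin(tha + alpha * dth),
                       math.cos(tha + alpha * dth))  # wrap to (-pi, pi]
    return x, y, theta

# Example: camera frame stamped halfway between two trajectory poses.
print(interpolate_pose(0.5, (0.0, 0.0, 0.0, 0.0),
                       (1.0, 1.0, 0.0, math.pi / 2)))  # (0.5, 0.0, ~pi/4)
```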
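Building the instance point cloud from the detections, depth images, and interpolated poses amounts to back-projecting the depth pixels inside each detection box through a pinhole camera model and tagging them with the detected class. A sketch under standard pinhole assumptions (intrinsics K, camera-to-world pose T_wc); all names here are ours, not the paper's:

```python
import numpy as np

def backproject_detection(depth, bbox, label, K, T_wc):
    """Back-project depth pixels inside one detection box into a
    world-frame point cloud tagged with the detected class label.

    depth: HxW depth image in meters (0 marks invalid pixels)
    bbox:  (u_min, v_min, u_max, v_max) box from the object detector
    K:     3x3 pinhole intrinsics
    T_wc:  4x4 camera-to-world transform from the interpolated trajectory
    """
    fx, fy, cx, cy = K[0, 0], K[1, 1], K[0, 2], K[1, 2]
    u_min, v_min, u_max, v_max = bbox
    us, vs = np.meshgrid(np.arange(u_min, u_max), np.arange(v_min, v_max))
    z = depth[v_min:v_max, u_min:u_max]
    us, vs, z = us[z > 0], vs[z > 0], z[z > 0]       # drop invalid depth
    pts_c = np.stack([(us - cx) * z / fx,            # pinhole back-projection
                      (vs - cy) * z / fy,
                      z, np.ones_like(z)])
    pts_w = (T_wc @ pts_c)[:3].T                     # into the world frame
    return pts_w, [label] * len(pts_w)
```

In practice one would also filter the box interior (for example, by depth continuity) so that background pixels inside the rectangle do not inherit the object label.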
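Given a dominant room direction theta (the paper extracts these from the map; one plausible source is a histogram of wall-segment orientations), an object-oriented minimum bounding rectangle for an instance's 2D footprint follows by rotating the points into that direction's frame and taking the axis-aligned extremes. A sketch, again with illustrative names rather than the authors' method:

```python
import numpy as np

def oriented_bounding_rect(points_xy, theta):
    """Bounding rectangle of an Nx2 object footprint, aligned with the
    dominant room direction theta (radians). Returns 4 world-frame corners."""
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s], [s, c]])          # dominant-direction frame -> world
    local = points_xy @ R                    # footprint in that frame
    lo, hi = local.min(axis=0), local.max(axis=0)
    corners = np.array([[lo[0], lo[1]], [hi[0], lo[1]],
                        [hi[0], hi[1]], [lo[0], hi[1]]])
    return corners @ R.T                     # corners back in the world frame
```

A rectangle aligned with the room's dominant directions matches how furniture is typically placed, which is why it tends to give tighter, more socially meaningful object footprints than an axis-aligned box in map coordinates.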