Embedded Omni-Directional Wheeled Mobile Robot Visual SLAM based on Depth Image Database
Main Author: | JING-JHOU CIOU 邱敬洲 |
---|---|
Other Authors: | Wei-wen Kao 高維文 |
Format: | Others |
Language: | zh-TW |
Published: | 2014 |
Online Access: | http://ndltd.ncl.edu.tw/handle/31434491840680213345 |
id |
ndltd-TW-102NTUS5489042 |
record_format |
oai_dc |
collection |
NDLTD |
language |
zh-TW |
format |
Others |
sources |
NDLTD |
description |
Master === National Taiwan University of Science and Technology === Department of Mechanical Engineering === 102 === In a completely unfamiliar environment, where the vehicle's position is unknown and no description of the environment is available, the vehicle's position can be estimated and a model of the environment reconstructed only from the variation of the distances, measured by the on-board sensors, between the moving vehicle and fixed features in the environment. However, image features are sensitive to the illumination conditions of the environment: images taken in the same environment at different times, under different illumination, may yield different feature points after feature detection, so the positions of those feature points do not faithfully describe the environment and cannot be reused after the positioning process.
To address this problem, this thesis uses a homemade unmanned Tri-Omni-Directional Wheeled Mobile Robot carrying an embedded system with an RGB-D camera (Microsoft Kinect) to simultaneously construct an image database and perform SLAM in an unknown environment, based on image feature point matching and point cloud matching. The image and depth information provided by the RGB-D camera makes it possible to fit both the image feature points and the point cloud depth measurements with the Iterative Closest Point (ICP) algorithm to compute the relative displacement, while stitching the point clouds together to build a model of the environment.
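To make the ICP step concrete, the following Python sketch aligns two point clouds and returns the rigid transform whose translation component is the relative displacement. It is a minimal point-to-point ICP written for illustration only, not the implementation used in the thesis; the nearest-neighbour correspondence search, the convergence threshold, and the absence of outlier rejection are simplifying assumptions.

```python
# Minimal point-to-point ICP sketch (illustrative only, not the thesis's implementation).
import numpy as np
from scipy.spatial import cKDTree

def best_fit_transform(src, dst):
    """Least-squares rigid transform (R, t) mapping src onto dst (Kabsch/SVD)."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)            # cross-covariance of centred points
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                       # guard against a reflection
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = dst_c - R @ src_c
    return R, t

def icp(source, target, max_iter=50, tol=1e-6):
    """Align source (N,3) to target (M,3); returns the 4x4 transform source -> target."""
    tree = cKDTree(target)
    src = source.copy()
    T = np.eye(4)
    prev_err = np.inf
    for _ in range(max_iter):
        dist, idx = tree.query(src)                # nearest-neighbour correspondences
        R, t = best_fit_transform(src, target[idx])
        src = src @ R.T + t                        # apply the incremental transform
        T_step = np.eye(4)
        T_step[:3, :3] = R
        T_step[:3, 3] = t
        T = T_step @ T                             # accumulate the total transform
        err = dist.mean()
        if abs(prev_err - err) < tol:              # converged: residual no longer shrinks
            break
        prev_err = err
    return T                                       # translation part = relative displacement
```

Run on successive Kinect point clouds, the accumulated transforms give the poses used to stitch the clouds into an environment model, as described above.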
After the image database has been established, positioning can be performed with any of three different sensors on board the unmanned robot:
1. RGB-D camera: operating in the same environment, the relative displacement is measured with the same method used during database construction.
2. Ordinary camera: although an ordinary camera provides no depth information, feature points can still be matched between the real-time image and a historical image. Since the historical images contain depth information, the relative displacement between the two viewpoints can be computed from the depths of the matched feature points (see the sketch after this list).
3. Laser range finder: ICP is applied to all points in the point cloud; the result is the relative displacement.
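The thesis does not spell out the solver used for the ordinary-camera case; one common way to realize item 2 is a Perspective-n-Point (PnP) solution, sketched below with OpenCV. The ORB features, the cv2.solvePnPRansac call, and the pinhole back-projection of the historical depth image are illustrative assumptions, not the thesis's exact pipeline.

```python
# Sketch of item 2: relative displacement from a live (depth-less) image and a
# historical RGB-D image in the database.  Illustrative assumptions: ORB features,
# a metric depth image, and OpenCV's RANSAC PnP solver.
import cv2
import numpy as np

def relative_pose(live_img, hist_img, hist_depth, K):
    """Return (R, t) mapping points from the historical camera frame into the live
    camera frame; t is the relative displacement."""
    orb = cv2.ORB_create(2000)
    kp_live, des_live = orb.detectAndCompute(live_img, None)
    kp_hist, des_hist = orb.detectAndCompute(hist_img, None)

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des_hist, des_live)    # query = historical, train = live

    fx, fy, cx, cy = K[0, 0], K[1, 1], K[0, 2], K[1, 2]
    obj_pts, img_pts = [], []
    for m in matches:
        u, v = kp_hist[m.queryIdx].pt
        z = hist_depth[int(v), int(u)]             # stored depth (assumed in metres)
        if z <= 0:                                 # skip pixels with no depth reading
            continue
        obj_pts.append([(u - cx) * z / fx, (v - cy) * z / fy, z])   # back-project to 3D
        img_pts.append(kp_live[m.trainIdx].pt)

    if len(obj_pts) < 4:
        raise ValueError("not enough matches with valid depth for PnP")
    obj_pts = np.asarray(obj_pts, dtype=np.float32)
    img_pts = np.asarray(img_pts, dtype=np.float32)
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(obj_pts, img_pts, K, None)
    R, _ = cv2.Rodrigues(rvec)
    return R, tvec
```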
Verified by experiments, future users carrying any of the three sensors above can, through SLAM, simultaneously correct their own position and update the recorded locations of the historical images in the database, using the computed relative displacements as measurements, thereby achieving self-positioning while continually improving the accuracy of the database in the known environment model.
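As a rough illustration of how one relative-displacement measurement can correct both the user's position and the stored position of a historical image, the sketch below splits the measurement residual between the two estimates in proportion to their uncertainties. This is a simplified, translation-only update under assumed Gaussian uncertainties, not the SLAM formulation used in the thesis.

```python
# Illustrative, translation-only correction of both the user pose and a database image
# pose from a single relative-displacement measurement (simplifying assumption, not the
# thesis's filter).
import numpy as np

def fuse_relative_displacement(p_user, var_user, p_img, var_img, d_meas):
    """p_user, p_img: current 2-D position estimates; var_user, var_img: their (scalar,
    isotropic) variances; d_meas: measured displacement from the image pose to the user.
    Returns the corrected (p_user, p_img)."""
    # Discrepancy between the user estimate and where the measurement, anchored at the
    # image pose estimate, says the user should be.
    residual = (p_img + d_meas) - p_user
    # The less certain estimate absorbs the larger share of the correction.
    w_user = var_user / (var_user + var_img)
    w_img = 1.0 - w_user
    return p_user + w_user * residual, p_img - w_img * residual

# Example: user estimate (2.0, 0.9), image recorded at (0.0, 0.0), measurement says the
# user is 2.0 m ahead and 1.0 m to the left of the image pose.
p_u, p_i = fuse_relative_displacement(np.array([2.0, 0.9]), 0.3,
                                      np.array([0.0, 0.0]), 0.1,
                                      np.array([2.0, 1.0]))
```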
|
author2 |
Wei-wen Kao 高維文 |
author |
JING-JHOU CIOU 邱敬洲 |
title |
Embedded Omni-Directional Wheeled Mobile Robot Visual SLAM based on Depth Image Database = 以深度影像資料庫為基礎的嵌入式全向輪機器人同步定位與建圖 |
publishDate |
2014 |
url |
http://ndltd.ncl.edu.tw/handle/31434491840680213345 |