Summary: | Master's === National Taiwan University of Science and Technology === Department of Mechanical Engineering === 102 === In a completely unfamiliar environment, where the vehicle's position is unknown and no description of the environment is available, estimating the vehicle's position and reconstructing a model of the environment can rely only on the variation of the distances, measured by the on-board sensors, between the moving vehicle and fixed features in the environment. However, image features are sensitive to the illumination of the environment: images taken in the same environment at different times, under different illumination, may yield different feature points after feature detection, so the recorded positions of those feature points cannot faithfully describe the environment and cannot be reused for later positioning.
To solve this problem, this thesis uses a homemade unmanned Tri-Omni-Directional Wheeled Mobile Robot carrying an embedded system with an RGB-D camera (Microsoft Kinect) to simultaneously construct an image database and perform SLAM, based on image feature point matching and point cloud matching, in an unknown environment. Because the RGB-D camera provides both image and depth information, the image feature points and the point cloud depth measurements can be fitted together by the Iterative Closest Point (ICP) algorithm to compute the relative displacement, while the point clouds are stitched together to build a model of the environment.
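As an illustration of how ICP yields the relative displacement, the following is a minimal point-to-point ICP sketch in Python (NumPy/SciPy). It is a generic textbook formulation, assuming two overlapping 3-D point clouds, not the thesis's actual implementation.

```python
import numpy as np
from scipy.spatial import cKDTree

def rigid_transform(src, dst):
    """Best-fit rotation R and translation t mapping src onto dst
    (corresponding N x 3 point sets), via the SVD-based Kabsch method."""
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against a reflection solution
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t

def icp(src, dst, iters=30, tol=1e-6):
    """Plain point-to-point ICP: repeatedly match each source point to its
    nearest destination point, then re-fit the rigid transform."""
    tree = cKDTree(dst)
    cur = src.copy()
    R_total, t_total = np.eye(3), np.zeros(3)
    prev_err = np.inf
    for _ in range(iters):
        dists, idx = tree.query(cur)
        R, t = rigid_transform(cur, dst[idx])
        cur = cur @ R.T + t
        # compose the incremental transform into the running total
        R_total, t_total = R @ R_total, R @ t_total + t
        err = dists.mean()
        if abs(prev_err - err) < tol:
            break
        prev_err = err
    return R_total, t_total           # relative rotation and displacement
```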
Once the image database has been established, positioning can be performed with any of three different sensors on board the unmanned robot:
1. RGB-D camera: still working in the same environment, the same method is used to measure relative displacement.
2. Ordinary camera: although an ordinary camera provides no depth information, feature points can still be matched between the real-time image and a historical image. Because the historical images carry depth information, the relative displacement between two images taken at different locations can still be computed from the depth of the matched feature points (see the sketch after this list).
3. Laser range finder: ICP is applied to all points in the scan, and the result is the relative displacement.
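The ordinary-camera case above amounts to a perspective-n-point (PnP) problem. Below is a hedged OpenCV sketch; the ORB detector, the depth-map layout, and the helper name are illustrative assumptions, since the abstract does not specify the feature detector or pipeline.

```python
import cv2
import numpy as np

def displacement_from_history(live_img, hist_img, hist_depth, K):
    """Estimate the pose of a live monocular image relative to a stored
    RGB-D keyframe: match 2D features, lift the historical keypoints to
    3D using the stored depth map, then solve PnP. Names are illustrative."""
    orb = cv2.ORB_create(1000)
    kp1, des1 = orb.detectAndCompute(hist_img, None)
    kp2, des2 = orb.detectAndCompute(live_img, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)

    fx, fy, cx, cy = K[0, 0], K[1, 1], K[0, 2], K[1, 2]
    obj_pts, img_pts = [], []
    for m in matches:
        u, v = kp1[m.queryIdx].pt
        z = hist_depth[int(v), int(u)]   # depth (metres) at the keypoint
        if z <= 0:                       # skip pixels with no valid depth
            continue
        obj_pts.append([(u - cx) * z / fx, (v - cy) * z / fy, z])
        img_pts.append(kp2[m.trainIdx].pt)

    ok, rvec, tvec, _ = cv2.solvePnPRansac(
        np.float32(obj_pts), np.float32(img_pts), K, None)
    assert ok, "PnP failed (too few valid matches)"
    return rvec, tvec                    # relative rotation and translation
```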
As verified by experiments, future users carrying any of the three sensors above can, through SLAM with the computed relative displacements as measurements, simultaneously correct their own position and update the recorded locations of the historical images in the database, achieving self-positioning while continuously improving the accuracy of the database within the known environment model.
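The update described above can be pictured, in greatly simplified form, as a joint least-squares adjustment of the user's position and the stored image positions. The translation-only toy below is an assumption for illustration; the thesis's actual SLAM formulation is not given in the abstract.

```python
import numpy as np

def joint_update(p_hist, d_meas, prior_w=1.0, meas_w=1.0):
    """Toy linear SLAM update (2-D, translation only): jointly refine the
    user's position x and the stored positions p_i of the historical
    images, given measured relative displacements d_i ~ x - p_i."""
    n = len(p_hist)
    # Unknowns: x (2 values) followed by the refined p_1..p_n (2 each).
    A = np.zeros((4 * n, 2 + 2 * n))
    b = np.zeros(4 * n)
    for i, (p, d) in enumerate(zip(p_hist, d_meas)):
        r = 2 * i                                   # measurement: x - p_i = d_i
        A[r:r+2, 0:2] = meas_w * np.eye(2)
        A[r:r+2, 2+2*i:4+2*i] = -meas_w * np.eye(2)
        b[r:r+2] = meas_w * np.asarray(d)
        rp = 2 * n + 2 * i                          # prior: p_i stays near its
        A[rp:rp+2, 2+2*i:4+2*i] = prior_w * np.eye(2)   # stored value
        b[rp:rp+2] = prior_w * np.asarray(p)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    return sol[:2], sol[2:].reshape(n, 2)   # refined x and refined p_i
```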
|