Using MFI to Improve Ego-motion Estimation towards more accurate 3D reconstruction
Master's === National University of Kaohsiung === Master's Program, Department of Computer Science and Information Engineering === 104
Main Authors: Tsung-I Chen 陳宗毅
Other Authors: Chia-Yen Chen 陳佳妍
Format: Others
Language: zh-TW
Published: 2016
Online Access: http://ndltd.ncl.edu.tw/handle/71426617509233286477
id
ndltd-TW-104NUK05392002
record_format
oai_dc
spelling
ndltd-TW-104NUK053920022017-09-24T04:40:27Z http://ndltd.ncl.edu.tw/handle/71426617509233286477 Using MFI to Improve Ego-motion Estimation towards more accurate 3D reconstruction 採用MFI改良本體移動估測以提升三維重建準確度之方法 Tsung-I Chen 陳宗毅 Master's, National University of Kaohsiung, Master's Program, Department of Computer Science and Information Engineering, 104. Chia-Yen Chen 陳佳妍. 2016. Degree thesis (學位論文); 95 pages; zh-TW.
collection
NDLTD
language
zh-TW
format
Others
sources
NDLTD
description
Master's === National University of Kaohsiung === Master's Program, Department of Computer Science and Information Engineering === 104 === 3D reconstruction techniques and systems have become more common with the advancement of relevant hardware devices. Among the various reconstruction systems, the LiDAR (Light Detection and Ranging) device is one of the most versatile and effective devices for acquiring 3D measurements on a large scale. However, the LiDAR alone cannot acquire color or motion information. To facilitate the integration of large-scale 3D data acquired by the LiDAR device, we propose an integrated 3D reconstruction system consisting of a LiDAR device and binocular stereo vision cameras. The integrated system registers the image data with the range data from the LiDAR device to obtain color-mapped 3D point clouds. In addition, the cameras facilitate the system's ego-motion estimation, such that data from different spatial and temporal neighborhoods can be merged in a complementary manner, towards the reconstruction of more accurate and detailed 3D models.
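To make the registration step concrete, below is a minimal sketch (in Python, not code from the thesis) of how range data can be fused with image data: LiDAR points are transformed into the camera frame, projected through the intrinsics, and assigned the color of the pixel they land on. The names, and the assumption that the LiDAR-to-camera extrinsics (R, t) are already known, are illustrative.

import numpy as np

def colorize_point_cloud(points_lidar, image, K, R, t):
    """Color LiDAR points by projecting them into a calibrated camera image.

    points_lidar : (N, 3) points in the LiDAR frame
    image        : (H, W, 3) RGB image from one stereo camera
    K            : (3, 3) camera intrinsic matrix
    R, t         : rigid transformation from the LiDAR frame to the camera frame
    """
    # Transform points into the camera frame: x_cam = R x_lidar + t.
    pts_cam = points_lidar @ R.T + t

    # Discard points behind the image plane.
    pts_cam = pts_cam[pts_cam[:, 2] > 0]

    # Perspective projection through the intrinsics.
    uv = pts_cam @ K.T
    uv = uv[:, :2] / uv[:, 2:3]
    u = np.round(uv[:, 0]).astype(int)
    v = np.round(uv[:, 1]).astype(int)

    # Keep only projections that fall inside the image.
    h, w = image.shape[:2]
    inside = (u >= 0) & (u < w) & (v >= 0) & (v < h)

    # Sample the pixel color for each visible 3D point.
    return pts_cam[inside], image[v[inside], u[inside]]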
In this work we implement an integrated 3D reconstruction system with two video cameras and a LiDAR, all mounted on top of an electric vehicle. The cameras are calibrated and fixed with constant internal parameters. The acquired images are rectified, and SURF (Speeded-Up Robust Features) is used to perform feature detection. Feature matching is performed spatially using SGM (Semi-Global Matching) to ensure the quantity and quality of the feature points. The camera parameters are used in conjunction with the matched feature points to determine their 3D coordinates. Temporal feature matching is also performed across the time-shifted image sequences, and the rotational and translational variations of the feature points are calculated over time. To reduce erroneous matches in feature tracking, we use Multi-frame Feature Integration (MFI) and compare the originally matched features with the integrated features, so that corrections can be made before errors affect subsequent estimates. Finally, the system's path is obtained by accumulating the feature transformations.
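A sketch of the multi-frame integration and path accumulation steps, assuming the general MFI idea of keeping a smoothed 3D estimate per tracked feature: the estimate is predicted into the current frame with the estimated motion and compared against the newly triangulated position, so that matches deviating too far can be rejected before they corrupt later estimates. The update rule, the threshold value, and all names are illustrative assumptions, not the thesis's exact formulation.

import numpy as np

def mfi_update(integrated, age, measured, R, t, max_dev=0.5):
    """One multi-frame integration step for a tracked feature.

    integrated : smoothed 3D position from earlier frames (previous-frame coords)
    age        : number of frames already integrated
    measured   : newly triangulated 3D position (current-frame coords)
    R, t       : estimated motion from the previous frame to the current frame
    max_dev    : rejection threshold in meters (hypothetical value)
    """
    # Predict where the integrated estimate should appear in the current frame.
    predicted = R @ integrated + t

    # A measurement far from the prediction indicates an erroneous match;
    # reset the track so it cannot affect subsequent motion estimates.
    if np.linalg.norm(measured - predicted) > max_dev:
        return None, 0

    # Running average: long-lived tracks weigh the smoothed estimate more.
    updated = (age * predicted + measured) / (age + 1)
    return updated, age + 1

def accumulate_pose(T_global, R, t):
    """Chain a per-frame (R, t) into the accumulated 4x4 vehicle path."""
    T_step = np.eye(4)
    T_step[:3, :3] = R
    T_step[:3, 3] = t
    return T_global @ T_step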
To obtain a texture-mapped 3D model, we integrate the data obtained by the LiDAR device and the cameras using rigid transformations. In this work, the external parameters of the rigid transformations are first solved linearly; the Levenberg-Marquardt algorithm is then used to optimize them. 3D points are then projected onto the 2D images to verify the accuracy of the external parameters. Finally, the estimated path is used to align the point clouds and perform texture mapping from the images, such that a large-scale, texture-mapped 3D model can be obtained.
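The two-stage extrinsic calibration described above can be sketched with an off-the-shelf Levenberg-Marquardt solver. The code below assumes SciPy, a rotation-vector parameterization, and a linear initial estimate already packed into init_params; function names are illustrative, not from the thesis.

import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def reprojection_residuals(params, pts_3d, pts_2d, K):
    """Residuals between projected LiDAR points and matched image points.

    params : 6-vector (rotation vector, translation) being optimized
    pts_3d : (N, 3) LiDAR points
    pts_2d : (N, 2) corresponding image points
    K      : (3, 3) camera intrinsic matrix
    """
    R = Rotation.from_rotvec(params[:3]).as_matrix()
    t = params[3:]
    proj = (pts_3d @ R.T + t) @ K.T          # project into the image
    proj = proj[:, :2] / proj[:, 2:3]        # perspective division
    return (proj - pts_2d).ravel()

def refine_extrinsics(init_params, pts_3d, pts_2d, K):
    """Refine a linear initial estimate with Levenberg-Marquardt ('lm')."""
    result = least_squares(reprojection_residuals, init_params,
                           args=(pts_3d, pts_2d, K), method='lm')
    R = Rotation.from_rotvec(result.x[:3]).as_matrix()
    return R, result.x[3:]

Minimizing the reprojection error in this way corresponds directly to the verification step above, where 3D points are projected onto the 2D images to check the external parameters.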
author2
Chia-Yen Chen
author_facet
Chia-Yen Chen Tsung-I Chen 陳宗毅
author
Tsung-I Chen 陳宗毅
spellingShingle
Tsung-I Chen 陳宗毅 Using MFI to Improve Ego-motion Estimation towards more accurate 3D reconstruction
author_sort
Tsung-I Chen
title
Using MFI to Improve Ego-motion Estimation towards more accurate 3D reconstruction
title_short
Using MFI to Improve Ego-motion Estimation towards more accurate 3D reconstruction
title_full
Using MFI to Improve Ego-motion Estimation towards more accurate 3D reconstruction
title_fullStr
Using MFI to Improve Ego-motion Estimation towards more accurate 3D reconstruction
title_full_unstemmed
Using MFI to Improve Ego-motion Estimation towards more accurate 3D reconstruction
title_sort
using mfi to improve ego-motion estimation towards more accurate 3d reconstruction
publishDate
2016
url
http://ndltd.ncl.edu.tw/handle/71426617509233286477
work_keys_str_mv
AT tsungichen usingmfitoimproveegomotionestimationtowardsmoreaccurate3dreconstruction AT chénzōngyì usingmfitoimproveegomotionestimationtowardsmoreaccurate3dreconstruction AT tsungichen cǎiyòngmfigǎiliángběntǐyídònggūcèyǐtíshēngsānwéizhòngjiànzhǔnquèdùzhīfāngfǎ AT chénzōngyì cǎiyòngmfigǎiliángběntǐyídònggūcèyǐtíshēngsānwéizhòngjiànzhǔnquèdùzhīfāngfǎ
_version_
1718540319923896320