Establishing Ego-motion Estimation from Monocular UAV Image Sequences



Bibliographic Details
Main Authors: HUANG, TING-HSIANG, 黃鼎翔
Other Authors: CHEN, CHIA-YEN
Format: Others
Language: zh-TW
Published: 2016
Online Access: http://ndltd.ncl.edu.tw/handle/hhxyby
Summary: Master's thesis === National University of Kaohsiung === Master's Program, Department of Computer Science and Information Engineering === 105 === In the past decades, 3D-related technologies have undergone significant development. Most 3D acquisition systems, however, lack self-localization abilities and may face difficulties in the reconstruction of a large-scale 3D environment. In this work, we investigate the application of monocular vision to ego-motion estimation, in particular for images acquired by a single camera mounted on an Unmanned Aerial Vehicle (UAV). In the proposed system, we first estimate the camera parameters from the acquired images. The images are then calibrated and registered. Features within the images are detected with Speeded-Up Robust Features (SURF), and spatial feature matching is performed. The Semi-Global Matching method is selected for spatial feature matching to maintain the quality and number of matched features. The matched features are then combined with the camera parameters to compute the 3D coordinates of the feature points, as well as the rotation and translation between temporally adjacent frames. Finally, the ego-motion of the system is obtained by optimizing the estimated movements. Experiments are performed to demonstrate the applicability of the proposed method, in particular towards 3D reconstruction from aerial images.
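
The following Python sketch illustrates the kind of monocular ego-motion pipeline the abstract describes (feature detection, pairwise matching, essential-matrix pose recovery, and pose chaining), using OpenCV. It is not the thesis implementation: the camera intrinsics K, the frame paths, the Hessian threshold, and the ratio-test value are assumed placeholder values; SURF requires an opencv-contrib build with the non-free modules (an ORB fallback is included); and the Semi-Global Matching step, the triangulation of 3D feature points, and the final movement optimization from the abstract are omitted for brevity.

import cv2
import numpy as np

# Assumed camera intrinsics. The thesis estimates these from the images;
# here they are placeholder values for an already-calibrated camera.
K = np.array([[1000.0,    0.0, 640.0],
              [   0.0, 1000.0, 360.0],
              [   0.0,    0.0,   1.0]])

def detect_and_match(img1, img2):
    """Detect features in two frames and match them with a ratio test."""
    try:
        # SURF lives in the non-free contrib module.
        detector = cv2.xfeatures2d.SURF_create(hessianThreshold=400)
    except AttributeError:
        # Fallback when SURF is unavailable in this OpenCV build.
        detector = cv2.ORB_create(nfeatures=2000)
    kp1, des1 = detector.detectAndCompute(img1, None)
    kp2, des2 = detector.detectAndCompute(img2, None)
    norm = cv2.NORM_L2 if des1.dtype == np.float32 else cv2.NORM_HAMMING
    matcher = cv2.BFMatcher(norm)
    pairs = [p for p in matcher.knnMatch(des1, des2, k=2) if len(p) == 2]
    good = [m for m, n in pairs if m.distance < 0.75 * n.distance]
    pts1 = np.float32([kp1[m.queryIdx].pt for m in good])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in good])
    return pts1, pts2

def relative_pose(pts1, pts2):
    """Recover rotation and unit-scale translation between two frames."""
    E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC,
                                   prob=0.999, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
    return R, t

def estimate_trajectory(frame_paths):
    """Chain pairwise poses into a camera trajectory (ego-motion estimate)."""
    pose = np.eye(4)                      # current camera-to-world transform
    trajectory = [pose[:3, 3].copy()]
    prev = cv2.imread(frame_paths[0], cv2.IMREAD_GRAYSCALE)
    for path in frame_paths[1:]:
        curr = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        pts1, pts2 = detect_and_match(prev, curr)
        R, t = relative_pose(pts1, pts2)
        step = np.eye(4)
        step[:3, :3] = R
        step[:3, 3] = t.ravel()           # translation is only up to scale
        pose = pose @ np.linalg.inv(step)
        trajectory.append(pose[:3, 3].copy())
        prev = curr
    return np.array(trajectory)

if __name__ == "__main__":
    # Hypothetical frame list; replace with the actual UAV image sequence.
    frames = [f"frames/{i:04d}.png" for i in range(10)]
    print(estimate_trajectory(frames))

Because the translation recovered from an essential matrix is defined only up to scale for a monocular sequence, a trajectory produced this way would still need scale estimation or the movement optimization step described in the abstract before it could be used for large-scale 3D reconstruction.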