Summary: | Master's Thesis === National Tsing Hua University === Department of Electrical Engineering === 104 === With the rapid development of drone applications in many fields, an accurate positioning method is essential. In this thesis, we design a vision-based space positioning system comprising two stages: space model construction and positioning. For space model construction, we use a Structure from Motion (SfM) model, which builds the model from a collection of images, together with the software VisualSFM. For positioning, we use the Fast Library for Approximate Nearest Neighbors (FLANN) to select inlier points and the Perspective-n-Point (PnP) algorithm to estimate the camera position. For image feature processing, three algorithms are compared: Scale-Invariant Feature Transform (SIFT), Speeded Up Robust Features (SURF), and Features from Accelerated Segment Test (FAST) + Fast Retina Keypoint (FREAK). In total, four different scenes are used in our vision-based positioning experiments. Checking points are selected to evaluate the three algorithms in terms of positioning error in centimeters. The results of each algorithm are discussed and compared with respect to processing time and positioning accuracy. Using 10 centimeters as the threshold, the experimental results from the four scenes show that SIFT positions successfully in at least 83% of the whole space, SURF in at least 65%, and FAST + FREAK in at least 60%.
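The positioning stage described above ends with a PnP solve, which returns the world-to-camera rotation R and translation t; the camera position itself is then recovered as C = -Rᵀt. A minimal pure-Python sketch of that final step, with the rotation matrix and translation vector as assumed example values (the thesis does not publish its pose data):

```python
# Recover the camera center C from a PnP pose (R, t), where the solver
# gives the world-to-camera transform: x_cam = R @ x_world + t.
# The camera center is the world point that maps to x_cam = 0,
# so 0 = R @ C + t, i.e. C = -R^T @ t.

def camera_center(R, t):
    """R: 3x3 rotation as a list of rows, t: length-3 translation."""
    # Column c of R is row c of R^T; negate the dot product with t.
    return [-sum(R[r][c] * t[r] for r in range(3)) for c in range(3)]

# Example: a 90-degree rotation about the z-axis and an assumed translation.
R = [[0.0, -1.0, 0.0],
     [1.0,  0.0, 0.0],
     [0.0,  0.0, 1.0]]
t = [1.0, 2.0, 3.0]
C = camera_center(R, t)
print(C)  # [-2.0, 1.0, -3.0]
```

In practice R and t would come from a solver such as OpenCV's `cv2.solvePnP` after FLANN matching has paired 2D keypoints with 3D model points; this sketch only shows the pose-to-position conversion, which is the quantity compared against the checking points in the experiments.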
|