A Tracking Method for Markerless Augmented Reality System Based on Color and Feature Points

Bibliographic Details
Main Authors: Shen Shao-Wei, 沈紹偉
Other Authors: Chueh-Wei Chang
Format: Others
Published: 2016
Online Access: http://ndltd.ncl.edu.tw/handle/94m3x8
Description
Summary: Master's thesis === National Taipei University of Technology === Graduate Institute of Computer Science and Information Engineering === Academic year 104 (2015-2016) === Augmented Reality (AR) is a technology that combines a real-time video stream of the real world with 3D virtual models, and it has been applied in many different areas. Presenting AR requires not only the 3D virtual model and the camera stream, but also reference information for rendering the 3D virtual model into a scene frame; this is called the target information. An AR system runs by recognizing and tracking the target in each scene frame, based on the target information, in real time, so it is important to control the computation spent on target recognition and tracking, especially for markerless AR; otherwise, issues arise such as delayed target tracking or unsmooth rendering of the 3D virtual model. To address this problem, we propose a detection and tracking method for markerless augmented reality systems. Using the color information of the target image, we detect the target in each frame of the camera stream and filter out the region in which the target is most likely to appear; feature points are then matched between this region and the target information only, instead of matching the whole scene frame against the target. Furthermore, to reduce computation and improve fluency, we set a tracking region around the target's position in the current frame to constrain matching in the next frame, and we decide whether the camera pose must be recalculated from the difference between the target's positions in the current and previous frames, further reducing the computation of the overall system. According to the experimental results, compared with similar methods, our method reduces computation time by about 60% overall.
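The abstract describes a three-stage pipeline: color-based detection of a candidate region, feature-point matching restricted to that region, and a motion test that decides whether the camera pose must be recomputed. The sketch below illustrates one possible reading of that pipeline in Python with OpenCV. The thesis does not specify its color model, feature detector, or thresholds; the HSV histogram back-projection, ORB features, a homography as a stand-in for the camera pose, and the MOVE_THRESH and ROI_MARGIN values are all illustrative assumptions, not the author's implementation.

import cv2
import numpy as np

MOVE_THRESH = 8.0   # assumed: pixels the target center may move before the pose is recomputed
ROI_MARGIN = 40     # assumed: padding around the last position, forming the tracking region

def candidate_region(frame, hist, last_box=None):
    """Stage 1: back-project the target's hue histogram (color information)
    to find the region where the target most probably appears. If a previous
    position is known, search only inside the tracking region around it."""
    x0, y0 = 0, 0
    if last_box is not None:
        x, y, w, h = last_box
        x0, y0 = max(0, x - ROI_MARGIN), max(0, y - ROI_MARGIN)
        frame = frame[y0:y + h + ROI_MARGIN, x0:x + w + ROI_MARGIN]
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    backproj = cv2.calcBackProject([hsv], [0], hist, [0, 180], 1)
    _, mask = cv2.threshold(backproj, 50, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    x, y, w, h = cv2.boundingRect(max(contours, key=cv2.contourArea))
    return x + x0, y + y0, w, h

def track(capture, target_bgr):
    """Yields (frame, target box, homography) per frame of the stream."""
    orb = cv2.ORB_create(500)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    t_kp, t_desc = orb.detectAndCompute(
        cv2.cvtColor(target_bgr, cv2.COLOR_BGR2GRAY), None)
    t_hsv = cv2.cvtColor(target_bgr, cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([t_hsv], [0], None, [32], [0, 180])
    cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)

    last_box, last_center, H = None, None, None
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        box = candidate_region(frame, hist, last_box)
        if box is None:
            last_box = None   # target lost: fall back to a full-frame search
            continue
        x, y, w, h = box
        # Stage 2: match feature points only inside the candidate region,
        # instead of matching the whole scene frame against the target.
        gray = cv2.cvtColor(frame[y:y + h, x:x + w], cv2.COLOR_BGR2GRAY)
        kp, desc = orb.detectAndCompute(gray, None)
        if desc is None or len(kp) < 4:
            continue
        matches = matcher.match(t_desc, desc)
        if len(matches) < 4:
            continue
        center = np.float32([x + w / 2.0, y + h / 2.0])
        # Stage 3: recompute the pose only if the target moved more than
        # MOVE_THRESH pixels since the last frame.
        if last_center is None or np.linalg.norm(center - last_center) > MOVE_THRESH:
            src = np.float32([t_kp[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
            dst = np.float32([kp[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
            dst += np.float32([x, y])  # shift ROI coordinates back to the full frame
            H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
        last_box, last_center = box, center
        yield frame, box, H  # H would drive rendering of the 3D virtual model

Restricting detectAndCompute and the matcher to the color-filtered region, and skipping findHomography when the target has barely moved, are the two places where a pipeline of this shape saves computation relative to matching the full frame and re-estimating the pose on every iteration, which is consistent with the roughly 60% reduction the abstract reports.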