In-car Tour Guidance in Outdoor Parks Using Augmented Reality and Omni-vision Techniques with an Automatic Learning Capability


Bibliographic Details
Main Authors: Tang, Hsin-Jun, 唐心駿
Other Authors: Tsai, Wen-Hsiang
Format: Others
Language: English
Published: 2014
Online Access:http://ndltd.ncl.edu.tw/handle/71962519957595476371
Description
Summary: Master's thesis === National Chiao Tung University === Institute of Computer Science and Engineering === 102 === In this study, an augmented-reality-based in-car tour guidance system with an automatic learning capability, built on computer vision techniques for use in outdoor park areas, is proposed. With the proposed system, a user can construct a tour guidance map for a park area in a simple and clear way and use this map to provide tour guidance information to in-car passengers. While riding in a vehicle driven through a park area, a passenger receives tour guidance information from the system, mainly the names of nearby buildings appearing along the guidance path. The building names are augmented onto the passenger-view image displayed on the mobile device held by the passenger.

To implement the proposed system, an environment map is first generated in the learning phase; it includes information about the tour path and the along-path buildings (mainly the building names). All the data are learned either manually or by programs and saved into a database for use in the navigation phase. Second, a method for automatic learning of along-path vertical-line features, mainly the edges of light poles, is proposed for use by the system. In this feature-learning stage, a vehicle equipped with a GPS device and a two-camera omni-imaging device is driven along a pre-selected guidance path. At each visited spot on the path, the system analyzes the omni-image pair taken by the upper and lower cameras of the imaging device to detect nearby vertical-line features and, with the aid of the GPS device, computes their positions and heights. The learned features are added to the map as landmarks for vehicle localization in the navigation phase. Next, a method for vehicle localization is proposed for use by the system.
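The two-camera geometry behind this feature-learning step can be sketched as follows. The flat-ground model, the vertically stacked camera arrangement, and all names below are illustrative assumptions for exposition, not the thesis's actual formulation:

```python
import math

def locate_vertical_feature(elev_upper, elev_lower,
                            cam_height_upper, cam_height_lower):
    """Triangulate a point on a vertical-line feature (e.g., the top of a
    light-pole edge) seen by two vertically stacked omni-cameras.

    elev_upper, elev_lower: elevation angles (radians) to the same feature
    point, measured from the horizontal at each camera center.
    Returns (horizontal_distance, feature_height) in the same units as the
    camera heights. Assumes the two sightlines are not parallel.
    """
    baseline = cam_height_upper - cam_height_lower
    # The same point P at height H and horizontal distance D satisfies:
    #   H = cam_height_upper + D * tan(elev_upper)
    #   H = cam_height_lower + D * tan(elev_lower)
    # Subtracting the two equations and solving for D:
    denom = math.tan(elev_lower) - math.tan(elev_upper)
    D = baseline / denom
    H = cam_height_upper + D * math.tan(elev_upper)
    return D, H
```

For instance, cameras at 2.0 m and 1.0 m viewing a pole top 5.0 m high at a 4.0 m horizontal distance see it at elevation angles atan(3/4) and atan(4/4), and the function recovers (4.0, 5.0). The vertical baseline is what makes the depth observable: a single omni-camera gives only the bearing and elevation of the line, not its distance.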
The localization method analyzes the omni-image taken by the upper camera of the imaging device, using the learned feature information and the GPS device to detect the learned features, and then computes the vehicle position from the geometric relation between the features and the vehicle. Finally, a method for AR-based guidance is proposed: it first generates a passenger-view image by transforming the omni-image acquired from the upper omni-camera onto the user's mobile-device screen, then uses the passenger-view image as a base and augments the building names onto it before the image is displayed. To accomplish this, the system computes the position of each building in the passenger-view image using the vehicle localization result. Good experimental results are presented to show the feasibility of the proposed methods for real applications.
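The final augmentation step, placing a building's name at its position in the passenger-view image, can be sketched with a simple pinhole projection. The function name, the assumed horizontal field of view, and the choice to anchor labels at the horizon line are all illustrative assumptions, not the thesis's actual method:

```python
import math

def building_screen_position(vehicle_xy, vehicle_heading, building_xy,
                             img_width, img_height, hfov=math.pi / 2):
    """Project a building's map position onto a passenger-view image.

    vehicle_xy, building_xy: 2-D map coordinates (e.g., local ENU meters).
    vehicle_heading: viewing direction in radians in the map frame,
    taken from the vehicle localization result.
    hfov: assumed horizontal field of view of the passenger view.
    Returns (x, y) pixel coordinates for the label anchor, or None if the
    building lies outside the field of view.
    """
    dx = building_xy[0] - vehicle_xy[0]
    dy = building_xy[1] - vehicle_xy[1]
    # Bearing of the building relative to the viewing direction.
    rel = math.atan2(dy, dx) - vehicle_heading
    rel = math.atan2(math.sin(rel), math.cos(rel))  # wrap to (-pi, pi]
    if abs(rel) >= hfov / 2:
        return None  # behind the view or off-screen; skip the label
    # Pinhole model: focal length chosen so hfov spans the image width.
    focal = (img_width / 2) / math.tan(hfov / 2)
    x = img_width / 2 + focal * math.tan(rel)
    y = img_height / 2  # anchor the label at the horizon line for simplicity
    return x, y
```

A building straight ahead lands at the horizontal center of the image, one bearing 26.6 degrees to the side of a 640-pixel-wide view with a 90-degree field of view lands at x = 480, and a building behind the vehicle returns None and is not labeled.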