In-car Tour Guidance in Outdoor Parks Using Augmented Reality and Omni-vision Techniques with an Automatic Learning Capability

Master's === National Chiao Tung University === Institute of Computer Science and Engineering === 102 === In this study, an augmented-reality-based in-car tour guidance system with an automatic learning capability, built on computer vision techniques for use in outdoor park areas, is proposed. With the proposed system, a user can construct a tour guidance map…


Bibliographic Details
Main Authors: Tang, Hsin-Jun, 唐心駿
Other Authors: Tsai, Wen-Hsiang
Format: Others
Language: en_US
Published: 2014
Online Access:http://ndltd.ncl.edu.tw/handle/71962519957595476371
id ndltd-TW-102NCTU5394127
record_format oai_dc
spelling ndltd-TW-102NCTU53941272015-10-14T00:18:37Z http://ndltd.ncl.edu.tw/handle/71962519957595476371 In-car Tour Guidance in Outdoor Parks Using Augmented Reality and Omni-vision Techniques with an Automatic Learning Capability 運用擴增實境及環場影像技術實做具有自動學習能力的戶外園區車內導覽系統 Tang, Hsin-Jun 唐心駿 Master's thesis, National Chiao Tung University, Institute of Computer Science and Engineering, academic year 102. In this study, an augmented-reality-based in-car tour guidance system with an automatic learning capability, built on computer vision techniques for use in outdoor park areas, is proposed. With the proposed system, a user can construct a tour guidance map for a park area in a simple and clear way and use the map to provide tour guidance information to in-car passengers. When a passenger rides in a vehicle driven through a park area, the system provides guidance information, mainly the names of the nearby buildings appearing along the guidance path. The building names are augmented onto the passenger-view image displayed on the mobile device held by the passenger. To implement these capabilities, an environment map is first generated in the learning phase; it includes information about the tour path and the along-path buildings (mainly their names). All the data are learned either manually or by programs and saved into a database for use in the navigation phase. Second, a method is proposed for automatic learning of along-path vertical-line features, mainly the edges of light poles. In this feature-learning stage, a vehicle equipped with a GPS device and a two-camera omni-imaging device is driven along a pre-selected guidance path. At each visited spot on the path, the system analyzes the omni-image pair taken by the upper and lower cameras of the imaging device to detect nearby vertical-line features, and computes their positions and heights with the aid of the GPS device.
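The two-camera geometry just described can be sketched as a vertical-baseline triangulation: two cameras stacked a known distance apart each measure an elevation angle to the same point on a vertical-line feature, which fixes the point's horizontal distance and height. This is an illustrative reconstruction under a simplified angle model, not the thesis's actual omni-camera formulation; all names below are assumptions.

```python
import math

def locate_vertical_feature(theta_lower, theta_upper, baseline):
    """Triangulate a point on a vertical-line feature (e.g. a light-pole
    edge) seen by two vertically stacked cameras.

    theta_lower / theta_upper: elevation angles (radians) from the lower
    and upper camera to the same point on the feature.
    baseline: vertical distance between the two camera centers (meters).
    Returns (horizontal_distance, height_above_lower_camera).
    """
    # From tan(theta_lower) = h / d and tan(theta_upper) = (h - b) / d:
    d = baseline / (math.tan(theta_lower) - math.tan(theta_upper))
    h = d * math.tan(theta_lower)
    return d, h
```

Combining this camera-relative position with the GPS fix of the vehicle would then give the feature's map coordinates.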
The learned features are added to the map as landmarks for vehicle localization in the navigation phase. Next, a method for vehicle localization is proposed. It analyzes the omni-image taken by the upper camera of the imaging device to detect the learned features, using the learned information about them together with the GPS device, and then computes the vehicle position from the spatial relation between the features and the vehicle. Finally, a method for AR-based guidance is proposed: it first generates a passenger-view image by transforming the omni-image acquired from the upper omni-camera onto the user's mobile-device screen, and then augments the building names onto that image before it is displayed. To accomplish this, the system computes the position of each building in the passenger-view image using the result of vehicle localization. Good experimental results are presented to show the feasibility of the proposed methods for real applications. Tsai, Wen-Hsiang 蔡文祥 2014 學位論文 ; thesis 102 en_US
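The localization step, fixing the vehicle position from mapped landmarks and the bearings at which they appear in the omni-image, can be illustrated with a generic two-landmark bearing intersection. The thesis's own formulation may differ; the heading input (e.g. from the GPS track) and all names here are assumptions.

```python
import math

def localize_from_two_landmarks(lm1, lm2, bearing1, bearing2, heading):
    """Estimate vehicle (x, y) from two landmarks with known map
    positions lm1, lm2 and the bearings (radians, vehicle frame) at
    which they are observed; heading is the vehicle yaw in the map
    frame. Generic two-line intersection, for illustration only.
    """
    phi1 = heading + bearing1  # global viewing direction to landmark 1
    phi2 = heading + bearing2  # global viewing direction to landmark 2
    d1 = (math.cos(phi1), math.sin(phi1))
    d2 = (math.cos(phi2), math.sin(phi2))
    # The vehicle satisfies lm1 - r1*d1 = lm2 - r2*d2; solve the
    # resulting 2x2 linear system for the range r1 by Cramer's rule.
    det = d1[0] * d2[1] - d1[1] * d2[0]
    if abs(det) < 1e-9:
        raise ValueError("landmark bearings are (nearly) parallel")
    dx, dy = lm1[0] - lm2[0], lm1[1] - lm2[1]
    r1 = (dx * d2[1] - dy * d2[0]) / det
    return lm1[0] - r1 * d1[0], lm1[1] - r1 * d1[1]
```

With more than two detected features, a least-squares version of the same intersection would average out bearing noise.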
collection NDLTD
language en_US
format Others
sources NDLTD
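The final AR step, placing each building name at its computed position on the passenger-view image, amounts to projecting a map point into the current view using the vehicle pose from localization. A minimal sketch under a pinhole-camera assumption; the system actually derives the passenger view by warping the omni-image, so the model, names, and parameters here are illustrative only.

```python
import math

def project_to_view(building_xy, vehicle_xy, heading, f_px, cx):
    """Return the horizontal pixel column at which a building-name
    label would be overlaid, given the building's map position, the
    vehicle position and heading (radians), a focal length in pixels
    f_px, and the image-center column cx. Returns None when the
    building is behind the viewing direction.
    """
    dx = building_xy[0] - vehicle_xy[0]
    dy = building_xy[1] - vehicle_xy[1]
    # Rotate the map displacement into the camera frame
    # (x forward along the heading, y to the left).
    xf = math.cos(heading) * dx + math.sin(heading) * dy
    yl = -math.sin(heading) * dx + math.cos(heading) * dy
    if xf <= 0:
        return None
    return cx - f_px * yl / xf
```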
author2 Tsai, Wen-Hsiang
author_facet Tsai, Wen-Hsiang
Tang, Hsin-Jun
唐心駿
author Tang, Hsin-Jun
唐心駿
spellingShingle Tang, Hsin-Jun
唐心駿
In-car Tour Guidance in Outdoor Parks Using Augmented Reality and Omni-vision Techniques with an Automatic Learning Capability
author_sort Tang, Hsin-Jun
title In-car Tour Guidance in Outdoor Parks Using Augmented Reality and Omni-vision Techniques with an Automatic Learning Capability
title_short In-car Tour Guidance in Outdoor Parks Using Augmented Reality and Omni-vision Techniques with an Automatic Learning Capability
title_full In-car Tour Guidance in Outdoor Parks Using Augmented Reality and Omni-vision Techniques with an Automatic Learning Capability
title_fullStr In-car Tour Guidance in Outdoor Parks Using Augmented Reality and Omni-vision Techniques with an Automatic Learning Capability
title_full_unstemmed In-car Tour Guidance in Outdoor Parks Using Augmented Reality and Omni-vision Techniques with an Automatic Learning Capability
title_sort in-car tour guidance in outdoor parks using augmented reality and omni-vision techniques with an automatic learning capability
publishDate 2014
url http://ndltd.ncl.edu.tw/handle/71962519957595476371
work_keys_str_mv AT tanghsinjun incartourguidanceinoutdoorparksusingaugmentedrealityandomnivisiontechniqueswithanautomaticlearningcapability
AT tángxīnjùn incartourguidanceinoutdoorparksusingaugmentedrealityandomnivisiontechniqueswithanautomaticlearningcapability
AT tanghsinjun yùnyòngkuòzēngshíjìngjíhuánchǎngyǐngxiàngjìshùshízuòjùyǒuzìdòngxuéxínénglìdehùwàiyuánqūchēnèidǎolǎnxìtǒng
AT tángxīnjùn yùnyòngkuòzēngshíjìngjíhuánchǎngyǐngxiàngjìshùshízuòjùyǒuzìdòngxuéxínénglìdehùwàiyuánqūchēnèidǎolǎnxìtǒng
_version_ 1718088714168565760