Deep Trail Following Robotic Guide Dog in Pedestrian Environments for People Who Are Blind and Visually Impaired

Master's === National Chiao Tung University === Institute of Electrical and Control Engineering === 106 === Navigation in pedestrian environments is critical to enabling independent mobility for the blind and visually impaired (BVI) in their daily lives. White canes have been commonly used to obtain contact feedback for following walls, curbs, or man-made trails, whereas guide dogs can assist in avoiding physical contact with obstacles or other pedestrians. However, tactile-trail infrastructure and guide dogs are both expensive to maintain. To address these problems, we developed a robotic guide dog that follows yellow-blue lines and the Boston Freedom Trail. In previous work, camera observations varied when man-made trails were deployed on various background textures under different illumination or shadows. We propose an autonomous, trail-following robotic guide dog that is robust to variations in background texture, illumination, and inter-class trail appearance. A deep convolutional neural network (CNN) is trained on data from real-world environments. The results show that our robotic guide dog can operate across different background textures and illumination conditions. This thesis also presents user studies with 10 BVI users who had no prior experience with our system. Each participant was introduced to the tasks and signed a consent form. We used a questionnaire to capture the users' subjective experience. All 10 participants completed the questionnaires after the experiments, and all of them were able to build and draw a mental map.


Bibliographic Details
Main Authors: Chen, Jih-Shi, 陳季希
Other Authors: Wang, Hsueh-Cheng
Format: Others
Language: zh-TW
Published: 2018
Online Access: http://ndltd.ncl.edu.tw/handle/djg74m
id ndltd-TW-106NCTU5449061
record_format oai_dc
spelling ndltd-TW-106NCTU54490612019-05-16T01:24:31Z http://ndltd.ncl.edu.tw/handle/djg74m Deep Trail Following Robotic Guide Dog in Pedestrian Environments for People Who Are Blind and Visually Impaired 為視覺障礙人士開發之深度追跡導盲機器犬 Chen, Jih-Shi 陳季希 Master's National Chiao Tung University Institute of Electrical and Control Engineering 106 Navigation in pedestrian environments is critical to enabling independent mobility for the blind and visually impaired (BVI) in their daily lives. White canes have been commonly used to obtain contact feedback for following walls, curbs, or man-made trails, whereas guide dogs can assist in avoiding physical contact with obstacles or other pedestrians. However, tactile-trail infrastructure and guide dogs are both expensive to maintain. To address these problems, we developed a robotic guide dog that follows yellow-blue lines and the Boston Freedom Trail. In previous work, camera observations varied when man-made trails were deployed on various background textures under different illumination or shadows. We propose an autonomous, trail-following robotic guide dog that is robust to variations in background texture, illumination, and inter-class trail appearance. A deep convolutional neural network (CNN) is trained on data from real-world environments. The results show that our robotic guide dog can operate across different background textures and illumination conditions. This thesis also presents user studies with 10 BVI users who had no prior experience with our system. Each participant was introduced to the tasks and signed a consent form. We used a questionnaire to capture the users' subjective experience. All 10 participants completed the questionnaires after the experiments, and all of them were able to build and draw a mental map. Wang, Hsueh-Cheng 王學誠 2018 學位論文 ; thesis 64 zh-TW
collection NDLTD
language zh-TW
format Others
sources NDLTD
description Master's === National Chiao Tung University === Institute of Electrical and Control Engineering === 106 === Navigation in pedestrian environments is critical to enabling independent mobility for the blind and visually impaired (BVI) in their daily lives. White canes have been commonly used to obtain contact feedback for following walls, curbs, or man-made trails, whereas guide dogs can assist in avoiding physical contact with obstacles or other pedestrians. However, tactile-trail infrastructure and guide dogs are both expensive to maintain. To address these problems, we developed a robotic guide dog that follows yellow-blue lines and the Boston Freedom Trail. In previous work, camera observations varied when man-made trails were deployed on various background textures under different illumination or shadows. We propose an autonomous, trail-following robotic guide dog that is robust to variations in background texture, illumination, and inter-class trail appearance. A deep convolutional neural network (CNN) is trained on data from real-world environments. The results show that our robotic guide dog can operate across different background textures and illumination conditions. This thesis also presents user studies with 10 BVI users who had no prior experience with our system. Each participant was introduced to the tasks and signed a consent form. We used a questionnaire to capture the users' subjective experience. All 10 participants completed the questionnaires after the experiments, and all of them were able to build and draw a mental map.
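The abstract describes a CNN that maps camera frames to trail-following behavior. A common formulation of this kind of system (not necessarily the one used in the thesis) has the network output per-frame class scores such as trail-on-left / trail-centered / trail-on-right, which are then converted into a steering command. The sketch below shows only that post-network steering step; the function names, class layout, and gain value are illustrative assumptions, not details taken from the thesis.

```python
import math

def softmax(scores):
    """Convert raw CNN class scores into probabilities."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def steering_command(scores, gain=0.8):
    """Map (left, center, right) class scores to a yaw rate in rad/s.

    If the trail appears on the left side of the image, the robot turns
    left (positive yaw here) and vice versa; a centered trail gives ~0.
    """
    p_left, p_center, p_right = softmax(scores)
    return gain * (p_left - p_right)

# Example: the network strongly believes the trail is to the left,
# so the command is a positive (leftward) yaw rate.
omega = steering_command([3.0, 0.5, -1.0])
```

A probability-weighted command like this degrades gracefully when the network is uncertain, which matters under the illumination and background variations the abstract highlights.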
author2 Wang, Hsueh-Cheng
author_facet Wang, Hsueh-Cheng
Chen, Jih-Shi
陳季希
author Chen, Jih-Shi
陳季希
spellingShingle Chen, Jih-Shi
陳季希
Deep Trail Following Robotic Guide Dog in Pedestrian Environments for People Who Are Blind and Visually Impaired
author_sort Chen, Jih-Shi
title Deep Trail Following Robotic Guide Dog in Pedestrian Environments for People Who Are Blind and Visually Impaired
title_short Deep Trail Following Robotic Guide Dog in Pedestrian Environments for People Who Are Blind and Visually Impaired
title_full Deep Trail Following Robotic Guide Dog in Pedestrian Environments for People Who Are Blind and Visually Impaired
title_fullStr Deep Trail Following Robotic Guide Dog in Pedestrian Environments for People Who Are Blind and Visually Impaired
title_full_unstemmed Deep Trail Following Robotic Guide Dog in Pedestrian Environments for People Who Are Blind and Visually Impaired
title_sort deep trail following robotic guide dog in pedestrian environments for people who are blind and visually impaired
publishDate 2018
url http://ndltd.ncl.edu.tw/handle/djg74m
work_keys_str_mv AT chenjihshi deeptrailfollowingroboticguidedoginpedestrianenvironmentsforpeoplewhoareblindandvisuallyimpaired
AT chénjìxī deeptrailfollowingroboticguidedoginpedestrianenvironmentsforpeoplewhoareblindandvisuallyimpaired
AT chenjihshi wèishìjuézhàngàirénshìkāifāzhīshēndùzhuījīdǎomángjīqìquǎn
AT chénjìxī wèishìjuézhàngàirénshìkāifāzhīshēndùzhuījīdǎomángjīqìquǎn
_version_ 1719175516322267136