A Vision-Based Robot Navigation System and Its Applications


Bibliographic Details
Main Authors: Yu-Chen Chiang, 蔣育承
Other Authors: Mu-Chun Su
Format: Others
Language: zh-TW
Published: 2006
Online Access: http://ndltd.ncl.edu.tw/handle/t7x6rf
id ndltd-TW-094NCU05392113
record_format oai_dc
spelling ndltd-TW-094NCU05392113 2018-05-16T04:25:32Z http://ndltd.ncl.edu.tw/handle/t7x6rf A Vision-Based Robot Navigation System and Its Applications 以視覺為基礎之機器人導航及應用 Yu-Chen Chiang 蔣育承 Master's National Central University Graduate Institute of Computer Science and Information Engineering 94 Robot localization has been a very challenging task in mobile robotics, since it is essential for a broad range of mobile robot tasks. This thesis proposes a new vision-based robot localization and map-building algorithm. With the proposed algorithm, a robot can automatically patrol an environment whose information has been learned, or plan a shortest path to visit particular locations pre-specified by the user. To learn a new environment, the robot must first carry out the exploration procedure (EP). In the EP, the robot uses its vision and an infrared sensor to build a map of the unknown environment. The map is represented as a graph consisting of vertices and edges. While the robot is navigating, a vertex is generated whenever a distinct place (e.g., an intersection or a blind alley) is detected, and edges are used to connect these vertices. At each vertex or other particular location, images of the environment are stored in a training data set. After the robot has finished the navigation tour and returned to its original starting position, a two-layer perceptron is trained to memorize the environment using the collected training data set. Once the environment map has been built at the end of the EP, the robot enters the operation procedure (OP). In the OP, the robot may automatically patrol the environment and transmit images to remote clients via a web browser, or execute a particular patrolling task assigned by the user. During a navigation tour, the robot determines its location by computing a match between the current observation and the expectation derived from the database; the match is computed by feeding the observation to the trained MLP (multilayer perceptron). Finally, the performance of the proposed algorithm is demonstrated by training a SONY AIBO to navigate a home environment. Mu-Chun Su 蘇木春 2006 degree thesis ; thesis 70 zh-TW
collection NDLTD
language zh-TW
format Others
sources NDLTD
description Master's === National Central University === Graduate Institute of Computer Science and Information Engineering === 94 === Robot localization has been a very challenging task in mobile robotics, since it is essential for a broad range of mobile robot tasks. This thesis proposes a new vision-based robot localization and map-building algorithm. With the proposed algorithm, a robot can automatically patrol an environment whose information has been learned, or plan a shortest path to visit particular locations pre-specified by the user. To learn a new environment, the robot must first carry out the exploration procedure (EP). In the EP, the robot uses its vision and an infrared sensor to build a map of the unknown environment. The map is represented as a graph consisting of vertices and edges. While the robot is navigating, a vertex is generated whenever a distinct place (e.g., an intersection or a blind alley) is detected, and edges are used to connect these vertices. At each vertex or other particular location, images of the environment are stored in a training data set. After the robot has finished the navigation tour and returned to its original starting position, a two-layer perceptron is trained to memorize the environment using the collected training data set. Once the environment map has been built at the end of the EP, the robot enters the operation procedure (OP). In the OP, the robot may automatically patrol the environment and transmit images to remote clients via a web browser, or execute a particular patrolling task assigned by the user. During a navigation tour, the robot determines its location by computing a match between the current observation and the expectation derived from the database; the match is computed by feeding the observation to the trained MLP (multilayer perceptron). Finally, the performance of the proposed algorithm is demonstrated by training a SONY AIBO to navigate a home environment.
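
The description above outlines two reusable ideas: a topological map built during the exploration procedure (vertices for distinct places, edges for the paths between them) and MLP-based place recognition used for localization and shortest-path patrol planning during the operation procedure. The following is a minimal, hypothetical sketch of those ideas in Python; it is not the thesis's actual implementation, and the use of networkx, scikit-learn's MLPClassifier, the 32-unit hidden layer, and all class and method names are assumptions made for illustration.

# Minimal, hypothetical sketch of the abstract's two ideas; not the thesis code.
# Assumptions: image observations are already reduced to fixed-length feature
# vectors, networkx models the topological map, and scikit-learn's MLPClassifier
# stands in for the thesis's two-layer perceptron.
import numpy as np
import networkx as nx
from sklearn.neural_network import MLPClassifier


class TopologicalMap:
    """Graph map: vertices are distinct places, edges are traversable paths."""

    def __init__(self):
        self.graph = nx.Graph()
        self.features = []   # training feature vectors collected during the EP
        self.labels = []     # vertex id associated with each feature vector
        self.mlp = None

    def add_vertex(self, vertex_id, image_features):
        # EP: create a vertex when a distinct place (intersection, blind
        # alley, ...) is detected and store its images in the training set.
        self.graph.add_node(vertex_id)
        for f in image_features:
            self.features.append(f)
            self.labels.append(vertex_id)

    def add_edge(self, v1, v2, distance):
        # EP: connect two places the robot has traveled between directly.
        self.graph.add_edge(v1, v2, weight=distance)

    def train_localizer(self):
        # End of EP: train a perceptron with one hidden layer (i.e. two
        # layers of weights) to map an observation to a vertex label.
        self.mlp = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000,
                                 random_state=0)
        self.mlp.fit(np.asarray(self.features), np.asarray(self.labels))

    def localize(self, observation):
        # OP: return the best-matching vertex for the current observation.
        return self.mlp.predict(np.asarray(observation).reshape(1, -1))[0]

    def plan_patrol(self, start, goal):
        # OP: shortest route between two user-specified locations.
        return nx.shortest_path(self.graph, start, goal, weight="weight")

In this sketch, localization is a classification problem over the learned vertices, which mirrors the abstract's idea of matching the current observation against the expectation stored for each place; the graph is only used for route planning, not for recognition.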
author2 Mu-Chun Su
author_facet Mu-Chun Su
Yu-Chen Chiang
蔣育承
author Yu-Chen Chiang
蔣育承
spellingShingle Yu-Chen Chiang
蔣育承
A Vision-Based Robot Navigation System and Its Applications
author_sort Yu-Chen Chiang
title A Vision-Based Robot Navigation System and Its Applications
title_short A Vision-Based Robot Navigation System and Its Applications
title_full A Vision-Based Robot Navigation System and Its Applications
title_fullStr A Vision-Based Robot Navigation System and Its Applications
title_full_unstemmed A Vision-Based Robot Navigation System and Its Applications
title_sort vision-based robot navigation system and its applications
publishDate 2006
url http://ndltd.ncl.edu.tw/handle/t7x6rf
work_keys_str_mv AT yuchenchiang avisionbasedrobotnavigationsystemanditsapplications
AT jiǎngyùchéng avisionbasedrobotnavigationsystemanditsapplications
AT yuchenchiang yǐshìjuéwèijīchǔzhījīqìréndǎohángjíyīngyòng
AT jiǎngyùchéng yǐshìjuéwèijīchǔzhījīqìréndǎohángjíyīngyòng
AT yuchenchiang visionbasedrobotnavigationsystemanditsapplications
AT jiǎngyùchéng visionbasedrobotnavigationsystemanditsapplications
_version_ 1718639881168617472