A Delivering Robot Roaming Between Floors with Real-time Recognition Capability

Bibliographic Details
Main Authors: Sheng-Hsiang Yuan, 袁聖翔
Other Authors: Tzone-I Wang
Format: Others
Language: zh-TW
Published: 2013
Online Access: http://ndltd.ncl.edu.tw/handle/08591855346314829319
Summary: Master's thesis === National Cheng Kung University === Department of Engineering Science === 101 === In today's busy society, people often wish they had assistants around to help with tedious work. This highlights the necessity and importance of service robots. They can not only provide simple services and assistance, such as delivering files, disposing of garbage, and serving drinks, but also help disabled people with daily needs. Moreover, they can serve in repetitive tasks or dangerous environments. To achieve such varied functions, a robot needs a robust manipulator as well as real-time image recognition algorithms. To work in high-rise buildings, which are the norm these days, a service robot must also be able to roam between different floors. The aim of this study is to develop a service robot that delivers various objects and can roam across floors by autonomously taking lifts. Maps of the environments and feature points of target objects, extracted with the SIFT algorithm from images taken from different perspectives, are established beforehand. A human-computer interface (HCI) is used to assign the robot its missions and goals. After being given a task, the robot first loads the map of the environment and the feature patterns of the target object. Next, a planned path guides the robot to the destination. A camera mounted on the manipulator continuously searches the environment, extracting feature points from objects in the input images and matching them against the feature points of the loaded target patterns. Once the target is found, the object is grasped for delivery.
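
The recognition step described above (matching SIFT features of a stored target pattern against the camera input) can be illustrated with a minimal sketch using OpenCV's SIFT implementation. This is not the thesis's actual code; the file name, camera index, ratio-test threshold, and match count are illustrative assumptions.

import cv2

def match_target(pattern_path, frame, min_matches=10, ratio=0.75):
    """Return True if the stored target pattern appears in the camera frame."""
    sift = cv2.SIFT_create()

    pattern = cv2.imread(pattern_path, cv2.IMREAD_GRAYSCALE)
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # Extract SIFT keypoints and descriptors for the stored pattern and the frame.
    _, des_pattern = sift.detectAndCompute(pattern, None)
    _, des_frame = sift.detectAndCompute(gray, None)
    if des_pattern is None or des_frame is None:
        return False

    # Brute-force matching with Lowe's ratio test to keep only reliable matches.
    matcher = cv2.BFMatcher()
    knn = matcher.knnMatch(des_pattern, des_frame, k=2)
    good = [p[0] for p in knn if len(p) == 2 and p[0].distance < ratio * p[1].distance]

    return len(good) >= min_matches

if __name__ == "__main__":
    cap = cv2.VideoCapture(0)  # e.g. the camera mounted on the manipulator
    ok, frame = cap.read()
    if ok and match_target("target_pattern.png", frame):
        print("Target object recognized; proceed to grasp.")
    cap.release()

In the workflow described in the abstract, such a check would run repeatedly on camera frames while the robot follows its planned path, and a positive match would trigger the grasping and delivery stage.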