Pose Control of Mobile Robots for Vision-Guided Material Grasping
Master's === National Cheng Kung University === Department of Mechanical Engineering (Master's & Doctoral Program) === 90 === Abstract Mobile robots frequently replace humans in handling and transporting wafer carriers in semiconductor production lines. The constructed mobile robot is primarily composed of a mobile base, a robot manipulator, and a visual system. This flexible materia...
Main Authors: | Mau-Hsiung Hsu, 許貿雄 |
---|---|
Other Authors: | Tsing-Iuan Tsay |
Format: | Others |
Language: | en_US |
Published: | 2002 |
Online Access: | http://ndltd.ncl.edu.tw/handle/60648860197434949207 |
id |
ndltd-TW-090NCKU5490153 |
---|---|
record_format |
oai_dc |
spelling |
ndltd-TW-090NCKU54901532015-10-13T12:46:50Z http://ndltd.ncl.edu.tw/handle/60648860197434949207 Pose Control of Mobile Robots for Vision-Guided Material Grasping 視覺導引物件抓取之自走型機器人之姿態控制 Mau-Hsiung Hsu 許貿雄 Master's National Cheng Kung University Department of Mechanical Engineering (Master's & Doctoral Program) 90 Abstract Mobile robots frequently replace humans in handling and transporting wafer carriers in semiconductor production lines. The constructed mobile robot is primarily composed of a mobile base, a robot manipulator, and a visual system. This flexible material transfer system reduces labor costs while providing reliable and efficient transportation and handling. During pick-and-place operations between a predefined station and the mobile robot, position and orientation errors of the mobile base are inevitably caused by the guidance control system. Thus, this study employs an eye-in-hand vision system to provide visual information for controlling the manipulator of the mobile robot to accurately grasp stationary material. This work further presents a position-based look-and-move, task-encoding control strategy for the eye-in-hand vision architecture that maintains all target features in the camera’s field of view throughout visual guidance. Moreover, the manipulator can quickly approach the material and precisely position the end-effector in the desired pose. Numerous techniques are required to implement such a task, including image enhancement, edge detection, corner and centroid detection, camera model calibration, robotic hand/eye calibration, control of camera zoom and focus, and a task-encoding scheme. Finally, these technologies are experimentally applied to realize a manipulator that can quickly approach a target object and precisely position its end-effector in the desired pose relative to the object, regardless of where the target object is located on the station. Specific experimental demonstrations include grasping the target object at different locations on the station and grasping the target object tilted at different angles relative to the station. Tsing-Iuan Tsay 蔡清元 2002 degree thesis ; thesis 93 en_US |
collection |
NDLTD |
language |
en_US |
format |
Others
|
sources |
NDLTD |
description |
Master's === National Cheng Kung University === Department of Mechanical Engineering (Master's & Doctoral Program) === 90 === Abstract
Mobile robots frequently replace humans in handling and transporting wafer carriers in semiconductor production lines. The constructed mobile robot is primarily composed of a mobile base, a robot manipulator, and a visual system. This flexible material transfer system reduces labor costs while providing reliable and efficient transportation and handling.
During pick-and-place operations between a predefined station and the mobile robot, position and orientation errors of the mobile base are inevitably caused by the guidance control system. Thus, this study employs an eye-in-hand vision system to provide visual information for controlling the manipulator of the mobile robot to accurately grasp stationary material. This work further presents a position-based look-and-move, task-encoding control strategy for the eye-in-hand vision architecture that maintains all target features in the camera’s field of view throughout visual guidance. Moreover, the manipulator can quickly approach the material and precisely position the end-effector in the desired pose. Numerous techniques are required to implement such a task, including image enhancement, edge detection, corner and centroid detection, camera model calibration, robotic hand/eye calibration, control of camera zoom and focus, and a task-encoding scheme.
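The centroid-detection step mentioned above can be illustrated with a minimal moment-based sketch. This is not the thesis's implementation, only a common formulation, and it assumes a binary segmentation mask of the target has already been obtained by the earlier image-enhancement and thresholding stages:

```python
import numpy as np

def binary_centroid(mask):
    """Centroid of a binary image via zeroth- and first-order moments.

    mask: 2-D array with object pixels set to 1.
    Returns the (row, col) centroid, or None for an empty mask.
    """
    m00 = mask.sum()  # zeroth-order moment: object area in pixels
    if m00 == 0:
        return None
    rows, cols = np.nonzero(mask)  # coordinates of object pixels
    # First-order moments divided by m00 reduce to coordinate means.
    return float(rows.mean()), float(cols.mean())

# Toy image: a 3x3 square of ones centered at (2, 2) in a 5x5 frame.
img = np.zeros((5, 5), dtype=np.uint8)
img[1:4, 1:4] = 1
print(binary_centroid(img))  # (2.0, 2.0)
```

In a visual-servoing pipeline, such image-plane centroids (together with detected corners) supply the feature coordinates from which the target's pose is estimated.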
Finally, these technologies are experimentally applied to realize a manipulator that can quickly approach a target object and precisely position its end-effector in the desired pose relative to the object, regardless of where the target object is located on the station. Specific experimental demonstrations include grasping the target object at different locations on the station and grasping the target object tilted at different angles relative to the station.
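As an illustration of the position-based look-and-move idea, the object pose observed by the eye-in-hand camera can be chained through the hand/eye calibration result and a grasp offset to yield a target end-effector pose. The frame names and numbers below are hypothetical, not taken from the thesis; they merely show the homogeneous-transform composition that position-based control relies on:

```python
import numpy as np

def make_T(R, t):
    """Assemble a 4x4 homogeneous transform from rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def desired_ee_pose(T_base_ee, T_ee_cam, T_cam_obj, T_obj_grasp):
    """Chain base->end-effector->camera->object->grasp frames.

    T_ee_cam comes from hand/eye calibration; T_cam_obj from the vision
    system; T_obj_grasp is the desired grasp pose relative to the object.
    """
    return T_base_ee @ T_ee_cam @ T_cam_obj @ T_obj_grasp

# Toy numbers: camera coincident with the end-effector, object 0.5 m
# ahead along the optical axis, grasp frame coincident with the object.
T_base_ee = make_T(np.eye(3), [0.2, 0.0, 0.3])
T_ee_cam = np.eye(4)
T_cam_obj = make_T(np.eye(3), [0.0, 0.0, 0.5])
T_obj_grasp = np.eye(4)
T_goal = desired_ee_pose(T_base_ee, T_ee_cam, T_cam_obj, T_obj_grasp)
print(T_goal[:3, 3])  # target end-effector position in the base frame
```

In a look-and-move scheme the controller would move the arm toward `T_goal` in steps, re-imaging the target between moves so that all features stay in the field of view.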
|
author2 |
Tsing-Iuan Tsay |
author_facet |
Tsing-Iuan Tsay Mau-Hsiung Hsu 許貿雄 |
author |
Mau-Hsiung Hsu 許貿雄 |
spellingShingle |
Mau-Hsiung Hsu 許貿雄 Pose Control of Mobile Robots for Vision-Guided Material Grasping |
author_sort |
Mau-Hsiung Hsu |
title |
Pose Control of Mobile Robots for Vision-Guided Material Grasping |
title_short |
Pose Control of Mobile Robots for Vision-Guided Material Grasping |
title_full |
Pose Control of Mobile Robots for Vision-Guided Material Grasping |
title_fullStr |
Pose Control of Mobile Robots for Vision-Guided Material Grasping |
title_full_unstemmed |
Pose Control of Mobile Robots for Vision-Guided Material Grasping |
title_sort |
pose control of mobile robots for vision-guided material grasping |
publishDate |
2002 |
url |
http://ndltd.ncl.edu.tw/handle/60648860197434949207 |
work_keys_str_mv |
AT mauhsiunghsu posecontrolofmobilerobotsforvisionguidedmaterialgrasping AT xǔmàoxióng posecontrolofmobilerobotsforvisionguidedmaterialgrasping AT mauhsiunghsu shìjuédǎoyǐnwùjiànzhuāqǔzhīzìzǒuxíngjīqìrénzhīzītàikòngzhì AT xǔmàoxióng shìjuédǎoyǐnwùjiànzhuāqǔzhīzìzǒuxíngjīqìrénzhīzītàikòngzhì |
_version_ |
1716865468819570688 |