基於機器學習之語義分割及幾何建模配對進行物件姿態估測及夾取 (Object Pose Estimation and Grasping Based on Machine-Learning Semantic Segmentation and Geometric Model Matching)

碩士 === 國立中正大學 === 電機工程研究所 === 107 === In automatic control, the use of computer vision to detect and identify objects, and then manipulate them with a robotic arm, is a core technology. One field of robotic manipulation is visual servo control. The combination of computer vision...


Bibliographic Details
Main Authors: SUN, GUO-JHEN, 孫國禎
Other Authors: LIN, HUEI-YUNG
Format: Others
Language:zh-TW
Published: 2019
Online Access:http://ndltd.ncl.edu.tw/handle/kr9qd2
id ndltd-TW-107CCU00442069
record_format oai_dc
spelling ndltd-TW-107CCU004420692019-11-02T05:27:06Z http://ndltd.ncl.edu.tw/handle/kr9qd2 基於機器學習之語義分割及幾何建模配對進行物件姿態估測及夾取 SUN, GUO-JHEN 孫國禎 碩士 國立中正大學 電機工程研究所 107 In automatic control, the use of computer vision to detect and identify objects, and then manipulate them with a robotic arm, is a core technology. One field of robotic manipulation is visual servo control. The combination of computer vision and automatic control will be a core technology of automated factories and fully automated stores. In this thesis, we propose a robotic grasping system that combines two-dimensional and three-dimensional visual information. Based on the two-dimensional information, we use machine learning to identify specific objects and obtain pixel-level segmentation masks. We then fit basic geometric models to estimate the object pose and select grasping points, instead of pre-constructing an accurate CAD model. To control the robot arm from visual data, we adopt an eye-in-hand architecture, obtaining images from a camera mounted directly on the arm; this allows the arm to be positioned in a real environment and completes the whole robotic grasping system. In the experiments, we display the object segmentation results, use basic geometric model matching to find the grasping points, and perform hand-eye calibration to recover the actual object position and carry out the grasp. LIN, HUEI-YUNG 林惠勇 2019 學位論文 ; thesis 56 zh-TW
collection NDLTD
language zh-TW
format Others
sources NDLTD
description 碩士 === 國立中正大學 === 電機工程研究所 === 107 === In automatic control, the use of computer vision to detect and identify objects, and then manipulate them with a robotic arm, is a core technology. One field of robotic manipulation is visual servo control. The combination of computer vision and automatic control will be a core technology of automated factories and fully automated stores. In this thesis, we propose a robotic grasping system that combines two-dimensional and three-dimensional visual information. Based on the two-dimensional information, we use machine learning to identify specific objects and obtain pixel-level segmentation masks. We then fit basic geometric models to estimate the object pose and select grasping points, instead of pre-constructing an accurate CAD model. To control the robot arm from visual data, we adopt an eye-in-hand architecture, obtaining images from a camera mounted directly on the arm; this allows the arm to be positioned in a real environment and completes the whole robotic grasping system. In the experiments, we display the object segmentation results, use basic geometric model matching to find the grasping points, and perform hand-eye calibration to recover the actual object position and carry out the grasp.
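The abstract above describes an eye-in-hand setup in which hand-eye calibration relates a camera mounted on the gripper to the robot base, so that a grasp point detected in the camera image can be expressed in base coordinates for the arm controller. As a minimal illustrative sketch of that transform chain (not the thesis's actual implementation; all frame names and numbers below are hypothetical), the composition looks like:

```python
import numpy as np

def make_transform(R, t):
    """Build a 4x4 homogeneous transform from rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Hypothetical hand-eye calibration result: the camera is mounted on the
# gripper (eye-in-hand), so this maps camera-frame points into the gripper frame.
T_gripper_camera = make_transform(np.eye(3), [0.0, 0.0, 0.05])

# Current arm pose from forward kinematics: gripper frame in the base frame.
T_base_gripper = make_transform(np.eye(3), [0.3, 0.0, 0.4])

# A grasp point detected by the camera (e.g. from the segmentation mask and
# geometric model fit), in homogeneous camera coordinates.
p_camera = np.array([0.0, 0.1, 0.2, 1.0])

# Chain the transforms to express the grasp point in the base frame,
# which is the coordinate system the arm is commanded in.
p_base = T_base_gripper @ T_gripper_camera @ p_camera
print(p_base[:3])  # grasp point in the base frame: [0.3, 0.1, 0.65]
```

With pure translations, the chain simply accumulates offsets: the camera point at z = 0.2 m plus the 0.05 m camera-to-gripper offset and the 0.4 m gripper height gives z = 0.65 m in the base frame; in general, the rotation parts make the order of composition matter.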
author2 LIN, HUEI-YUNG
author_facet LIN, HUEI-YUNG
SUN, GUO-JHEN
孫國禎
author SUN, GUO-JHEN
孫國禎
spellingShingle SUN, GUO-JHEN
孫國禎
基於機器學習之語義分割及幾何建模配對進行物件姿態估測及夾取
author_sort SUN, GUO-JHEN
title 基於機器學習之語義分割及幾何建模配對進行物件姿態估測及夾取
title_short 基於機器學習之語義分割及幾何建模配對進行物件姿態估測及夾取
title_full 基於機器學習之語義分割及幾何建模配對進行物件姿態估測及夾取
title_fullStr 基於機器學習之語義分割及幾何建模配對進行物件姿態估測及夾取
title_full_unstemmed 基於機器學習之語義分割及幾何建模配對進行物件姿態估測及夾取
title_sort 基於機器學習之語義分割及幾何建模配對進行物件姿態估測及夾取
publishDate 2019
url http://ndltd.ncl.edu.tw/handle/kr9qd2
work_keys_str_mv AT sunguojhen jīyújīqìxuéxízhīyǔyìfēngējíjǐhéjiànmópèiduìjìnxíngwùjiànzītàigūcèjíjiāqǔ
AT sūnguózhēn jīyújīqìxuéxízhīyǔyìfēngējíjǐhéjiànmópèiduìjìnxíngwùjiànzītàigūcèjíjiāqǔ
_version_ 1719285362340134912