3D Vision Based Robot Manipulator Timed Trajectory Generation System


Bibliographic Details
Main Author: Sergio Omar Chong Lugo
Other Authors: Chyi-Yeu Lin
Format: Others
Language: en_US
Published: 2018
Online Access: http://ndltd.ncl.edu.tw/handle/2p7ts5
id ndltd-TW-106NTUS5489077
record_format oai_dc
spelling ndltd-TW-106NTUS5489077 2019-05-16T00:59:40Z http://ndltd.ncl.edu.tw/handle/2p7ts5 3D Vision Based Robot Manipulator Timed Trajectory Generation System 3D視覺為基之機器手臂含時間軌跡建立系統 Sergio Omar Chong Lugo Sergio Omar Chong Lugo Master's thesis === National Taiwan University of Science and Technology === Department of Mechanical Engineering === 106 === This thesis proposes a method for anticipating the motion of a commercial articulated robot arm using multiple 3D cameras, in this case two Kinect v2 devices that sense the robot arm simultaneously. The 3D cameras retrieve color and depth information in the form of organized point clouds. Each Kinect v2 device sits at an unknown position and orientation relative to the robot arm base, so a robot-world calibration method based on QR markers and the Singular Value Decomposition (SVD) algorithm is applied to each device. This calibration yields a single coordinate system shared by the robot arm and all 3D cameras. The robot arm used in this work cannot report its end-effector pose and joint angles to a personal computer during online operation, so an offline approach based on image processing, point-cloud processing, and the SVD algorithm solves this problem. In this offline approach, an attachment holding four small colored balls is mounted on the robot arm flange. During the robot's motion, the Kinect v2 devices record time-stamped organized point-cloud frames of the robot arm and its environment; each recorded frame contains 3D and RGB information about the colored balls. Image and point-cloud processing locate the center of each colored ball in 3D space, yielding a set of 3D points. From this set of points, the SVD algorithm recovers the end-effector pose relative to the robot-arm base coordinate system at the time each frame was recorded, and inverse kinematics converts each pose into joint angles. All of this data forms a time history of joint angles, suitable for interpolating the joint-angle configuration at any intermediate time. With this time history, the Kinect v2 devices sense the robot arm's motion online. Chyi-Yeu Lin 林其禹 2018 Degree thesis (學位論文) ; thesis 70 en_US
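The pose-recovery step described in the abstract — finding the rigid transform that best maps a set of corresponding 3D points (the ball centers) between two coordinate frames — is commonly solved with the SVD-based Kabsch/Umeyama method. The sketch below is a minimal, generic illustration of that technique, not code from the thesis; the function name and array shapes are assumptions.

```python
import numpy as np

def rigid_transform_svd(P, Q):
    """Estimate the rotation R and translation t mapping point set P onto Q.

    P, Q: (N, 3) arrays of corresponding 3D points (N >= 3, non-collinear).
    Returns R (3x3) and t (3,) such that Q ~= (R @ P.T).T + t.
    """
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)   # centroids of each point set
    H = (P - cP).T @ (Q - cQ)                 # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against a reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cQ - R @ cP
    return R, t
```

The reflection guard matters in practice: with noisy ball-center measurements, the raw SVD solution can return a determinant of -1 (a mirror image) instead of a proper rotation.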
collection NDLTD
language en_US
format Others
sources NDLTD
description Master's thesis === National Taiwan University of Science and Technology === Department of Mechanical Engineering === 106 === This thesis proposes a method for anticipating the motion of a commercial articulated robot arm using multiple 3D cameras, in this case two Kinect v2 devices that sense the robot arm simultaneously. The 3D cameras retrieve color and depth information in the form of organized point clouds. Each Kinect v2 device sits at an unknown position and orientation relative to the robot arm base, so a robot-world calibration method based on QR markers and the Singular Value Decomposition (SVD) algorithm is applied to each device. This calibration yields a single coordinate system shared by the robot arm and all 3D cameras. The robot arm used in this work cannot report its end-effector pose and joint angles to a personal computer during online operation, so an offline approach based on image processing, point-cloud processing, and the SVD algorithm solves this problem. In this offline approach, an attachment holding four small colored balls is mounted on the robot arm flange. During the robot's motion, the Kinect v2 devices record time-stamped organized point-cloud frames of the robot arm and its environment; each recorded frame contains 3D and RGB information about the colored balls. Image and point-cloud processing locate the center of each colored ball in 3D space, yielding a set of 3D points. From this set of points, the SVD algorithm recovers the end-effector pose relative to the robot-arm base coordinate system at the time each frame was recorded, and inverse kinematics converts each pose into joint angles. All of this data forms a time history of joint angles, suitable for interpolating the joint-angle configuration at any intermediate time. With this time history, the Kinect v2 devices sense the robot arm's motion online.
author2 Chyi-Yeu Lin
author_facet Chyi-Yeu Lin
Sergio Omar Chong Lugo
author Sergio Omar Chong Lugo
spellingShingle Sergio Omar Chong Lugo
3D Vision Based Robot Manipulator Timed Trajectory Generation System
3D Vision Based Robot Manipulator Timed Trajectory Generation System
author_sort Sergio Omar Chong Lugo
title 3D Vision Based Robot Manipulator Timed Trajectory Generation System
title_short 3D Vision Based Robot Manipulator Timed Trajectory Generation System
title_full 3D Vision Based Robot Manipulator Timed Trajectory Generation System
title_fullStr 3D Vision Based Robot Manipulator Timed Trajectory Generation System
title_full_unstemmed 3D Vision Based Robot Manipulator Timed Trajectory Generation System
title_sort 3d vision based robot manipulator timed trajectory generation system
publishDate 2018
url http://ndltd.ncl.edu.tw/handle/2p7ts5
work_keys_str_mv AT sergioomarchonglugo 3dvisionbasedrobotmanipulatortimedtrajectorygenerationsystem
AT sergioomarchonglugo 3dshìjuéwèijīzhījīqìshǒubìhánshíjiānguǐjījiànlìxìtǒng
_version_ 1719172477072965632