Summary: | Master's === National Taiwan University of Science and Technology === Department of Mechanical Engineering === 106 === This thesis proposes a method for anticipating the motion of a commercial articulated robot arm using multiple 3D cameras, in this case two Kinect v2 devices that sense the robotic arm simultaneously. The 3D cameras retrieve color and depth information in the form of organized point clouds. Each Kinect v2 device is located at an unknown position and orientation relative to the robotic arm base, so a robot-world calibration method based on QR markers and the Singular Value Decomposition (SVD) algorithm is applied to each device. This calibration yields a single common coordinate system for the robotic arm and all 3D cameras. The robotic arm used in this work cannot send its end-effector pose and joint angles to a personal computer during online operation. Instead, an offline approach based on image and point-cloud processing and the SVD algorithm solves this problem. In this offline approach, the robotic arm flange holds an attachment of four small colored balls. During the robot's motion, the Kinect v2 devices record time-stamped organized point-cloud frames of the robotic arm and its environment. Each recorded frame contains 3D and RGB information about the colored balls. Image and point-cloud processing locates the center of each colored ball in 3D space, yielding a set of 3D points. The SVD algorithm uses this set of points to compute the end-effector pose relative to the robotic arm base coordinate system at the time each frame was recorded. The joint angles for each end-effector pose are then obtained by inverse kinematics. The result is a time history of joint angles, suitable for interpolating the joint configuration at any time between recorded frames. With this time history, the Kinect v2 devices can sense the robotic arm's motion online.
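The abstract does not include the thesis's actual code. As a minimal sketch of the SVD-based rigid registration it describes (applied both in the QR-marker robot-world calibration and in recovering the end-effector pose from the four ball centers), the following Python/NumPy function implements the standard Kabsch procedure; the function name and array shapes are illustrative assumptions.

    import numpy as np

    def rigid_transform_svd(src, dst):
        """Estimate rotation R and translation t such that dst ~= R @ src + t.

        src, dst: (N, 3) arrays of corresponding 3D points, e.g. the four
        ball centers seen by a Kinect v2 and their counterparts in the
        robot-base frame. Classic SVD (Kabsch) rigid registration.
        """
        src_c = src.mean(axis=0)                  # centroids of both sets
        dst_c = dst.mean(axis=0)
        H = (src - src_c).T @ (dst - dst_c)       # 3x3 cross-covariance
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:                  # guard against a reflection
            Vt[-1, :] *= -1
            R = Vt.T @ U.T
        t = dst_c - R @ src_c
        return R, t

With four non-coplanar ball centers, this yields the full 6-DOF end-effector pose at each frame time; the same routine, fed QR-marker corners instead, would produce each camera-to-robot-base transform for the calibration step.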
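The ball-localization step could be sketched as follows, under the assumption of simple per-channel RGB thresholding on an organized point cloud (the thesis's exact segmentation method and thresholds are not given in the abstract):

    import numpy as np

    def ball_center(xyz, rgb, lo, hi):
        """Estimate one colored ball's center from an organized point cloud.

        xyz: (H, W, 3) array of 3D coordinates aligned with the color image.
        rgb: (H, W, 3) uint8 color image.
        lo, hi: per-channel RGB bounds for this ball's color (assumed values;
        real thresholds would be tuned, often in HSV space).
        Returns the centroid of the masked, valid 3D points.
        """
        mask = np.all((rgb >= lo) & (rgb <= hi), axis=-1)
        pts = xyz[mask]
        pts = pts[np.isfinite(pts).all(axis=1)]   # drop invalid-depth points
        return pts.mean(axis=0)

Repeating this for the four color ranges produces the point set passed to the SVD step. Note that the centroid of visible surface points sits slightly off the true sphere center; a sphere fit would correct this, though whether the thesis does so is not stated in the abstract.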
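Finally, the interpolation of the joint-angle time history admits a one-line-per-joint sketch; linear interpolation and a six-axis arm are assumptions here, not details from the abstract:

    import numpy as np

    def joints_at(t_query, t_hist, q_hist):
        """Interpolate a joint configuration at an arbitrary query time.

        t_hist: (N,) frame recording times, strictly increasing.
        q_hist: (N, 6) joint angles per frame (6 axes assumed).
        Returns the (6,) linearly interpolated joint vector at t_query.
        """
        return np.array([np.interp(t_query, t_hist, q_hist[:, j])
                         for j in range(q_hist.shape[1])])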