Robotic manipulation based on visual and tactile perception

We still struggle to deliver autonomous robots that can perform manipulation tasks as simple for a human as picking up items. Part of the difficulty lies in the fact that such an operation requires a robot that can deal with uncertainty in an unstructured environment. In this thesis, we propose the use of visual and tactile perception to provide solutions that improve the robustness of a robotic manipulator in such an environment.

Bibliographic Details
Main Author: Zapata-Impata, Brayan S.
Other Authors: Gil, Pablo
Format: Doctoral Thesis
Language: English
Published: Universidad de Alicante 2021
Subjects: Robotic manipulation; Robotic grasping; Computer vision; Tactile perception; Machine learning; Deep learning; Ingeniería de Sistemas y Automática
Online Access: http://hdl.handle.net/10045/118217
id ndltd-ua.es-oai-rua.ua.es-10045-118217
record_format oai_dc
spelling Robotic manipulation based on visual and tactile perception
Zapata-Impata, Brayan S.; Gil, Pablo
Universidad de Alicante. Instituto Universitario de Investigación Informática
Robotic manipulation; Robotic grasping; Computer vision; Tactile perception; Machine learning; Deep learning; Ingeniería de Sistemas y Automática
2021-09-28T09:14:33Z 2021-09-28T09:14:33Z 2020 2020 2020-09-17
info:eu-repo/semantics/doctoralThesis
http://hdl.handle.net/10045/118217
eng
Licencia Creative Commons Reconocimiento-NoComercial-SinObraDerivada 4.0
info:eu-repo/semantics/openAccess
Universidad de Alicante
collection NDLTD
language English
format Doctoral Thesis
sources NDLTD
topic Robotic manipulation
Robotic grasping
Computer vision
Tactile perception
Machine learning
Deep learning
Ingeniería de Sistemas y Automática
spellingShingle Robotic manipulation
Robotic grasping
Computer vision
Tactile perception
Machine learning
Deep learning
Ingeniería de Sistemas y Automática
Zapata-Impata, Brayan S.
Robotic manipulation based on visual and tactile perception
description We still struggle to deliver autonomous robots that can perform manipulation tasks as simple for a human as picking up items. Part of the difficulty lies in the fact that such an operation requires a robot that can deal with uncertainty in an unstructured environment. In this thesis, we propose the use of visual and tactile perception to provide solutions that improve the robustness of a robotic manipulator in such an environment.
First, we approach robotic grasping using a single 3D point cloud that offers only a partial view of the objects in the scene. Moreover, the objects are unknown: they have not been previously recognised and no 3D model is available for computing candidate grasping points. Our experiments show that this solution is fast and robust, taking on average 17 ms to find a grasp that is stable 85% of the time.
Tactile sensors provide a rich source of information about the contact experienced by a robotic hand while it manipulates an object. We exploit this type of data with deep learning to predict the stability of a grasp and to detect the direction in which a contacted object slips. We show that our solution correctly predicts stability 76% of the time from a single tactile reading. We also demonstrate that learning temporal and spatial patterns yields slip-direction detections that are correct up to 82% of the time and are delayed only 50 ms after the actual slip event begins.
Despite the good results achieved on these two tactile tasks, this data modality has a serious limitation: it can only be registered during contact. In contrast, humans can estimate how grasping an object would feel just by looking at it. Inspired by this, we present our contributions on learning to generate tactile responses from vision. We propose a supervised solution based on training a deep neural network that models the behaviour of a tactile sensor, given 3D visual information of the target object and grasp data as input. As a result, the system has to learn to link vision to touch. Our experiments show that the system learns to generate tactile responses on a set of 12 items, being off by only 0.06 relative error points. Furthermore, we also experiment with a semi-supervised solution that learns this task with a reduced need for labelled data. It learns the tactile data generation task with 50% less data than the supervised solution, while increasing the error by only 17%.
Finally, we introduce our work on generating candidate grasps that are refined through simulation of the tactile responses they would produce. This work unifies the contributions of this thesis, as it combines the modules for grasp calculation, stability prediction and tactile data generation. In early experiments, it finds grasps that are more stable than the original ones produced by our method based on 3D point clouds.
=== This doctoral thesis was carried out with the support of the Spanish Ministry of Economy, Industry and Competitiveness through grant BES-2016-078290.
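The description mentions predicting grasp stability from a single tactile reading with deep learning. As a purely illustrative sketch rather than the thesis architecture, the pressure values of each fingertip can be arranged as a small 2D grid and fed to a binary classifier; the grid shape, the use of three fingertips as input channels and all layer sizes below are assumptions.

# Hypothetical sketch: binary grasp-stability classifier from one tactile frame.
# The taxel grid shape (4x6 per fingertip, 3 fingertips as channels) and the
# network sizes are illustrative assumptions, not the thesis model.
import torch
import torch.nn as nn

class StabilityNet(nn.Module):
    def __init__(self, fingers: int = 3):
        super().__init__()
        # Treat each fingertip's taxel grid as one input channel.
        self.features = nn.Sequential(
            nn.Conv2d(fingers, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64, 32), nn.ReLU(),
            nn.Linear(32, 1),  # logit: stable vs. unstable grasp
        )

    def forward(self, tactile: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(tactile))

if __name__ == "__main__":
    net = StabilityNet()
    frame = torch.rand(8, 3, 4, 6)                # batch of single tactile readings
    labels = torch.randint(0, 2, (8, 1)).float()  # 1 = grasp held, 0 = grasp failed
    loss = nn.BCEWithLogitsLoss()(net(frame), labels)
    loss.backward()                               # one illustrative training step
    print(loss.item())

A single forward pass over one such frame is the kind of check that would let a robot decide whether to re-grasp before lifting, which is the use case the description implies.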
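The slip-detection result quoted above comes from learning temporal as well as spatial patterns in the tactile stream. The sketch below only illustrates that idea on a fixed-length window of tactile frames; the per-frame encoder, the LSTM, the window length and the five direction classes are assumptions, since the record does not detail the actual model.

# Hypothetical sketch: slip-direction classifier over a window of tactile frames.
# Spatial patterns are captured per frame by a small CNN, temporal patterns by an
# LSTM over the window. Frame size, window length and the 5 direction classes
# (e.g. none/up/down/left/right) are illustrative assumptions.
import torch
import torch.nn as nn

class SlipDirectionNet(nn.Module):
    def __init__(self, classes: int = 5):
        super().__init__()
        self.frame_encoder = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),   # one 16-d code per frame
        )
        self.temporal = nn.LSTM(input_size=16, hidden_size=32, batch_first=True)
        self.classifier = nn.Linear(32, classes)

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        # frames: (batch, time, rows, cols)
        b, t, r, c = frames.shape
        codes = self.frame_encoder(frames.reshape(b * t, 1, r, c)).reshape(b, t, -1)
        _, (h, _) = self.temporal(codes)             # last hidden state of the LSTM
        return self.classifier(h[-1])                # logits over slip directions

if __name__ == "__main__":
    net = SlipDirectionNet()
    window = torch.rand(8, 10, 4, 6)                 # 10-frame tactile window
    labels = torch.randint(0, 5, (8,))
    loss = nn.CrossEntropyLoss()(net(window), labels)
    loss.backward()
    print(loss.item())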
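The description also outlines a supervised network that generates a tactile response from 3D visual information of the object plus grasp data. The following sketch captures only that input/output contract; the point-cloud encoder (a shared point-wise MLP with max pooling), the 7-value grasp parameterisation and the 24 predicted taxel values are assumptions, not the network described in the thesis.

# Hypothetical sketch of a vision-to-touch regressor: a partial point cloud of
# the object plus grasp parameters in, a vector of simulated taxel values out.
# Encoder style and all sizes are assumptions.
import torch
import torch.nn as nn

class TactileFromVision(nn.Module):
    def __init__(self, grasp_dim: int = 7, taxels: int = 24):
        super().__init__()
        # Point-wise features followed by a symmetric max pool over the cloud.
        self.point_mlp = nn.Sequential(
            nn.Linear(3, 64), nn.ReLU(),
            nn.Linear(64, 128), nn.ReLU(),
        )
        self.head = nn.Sequential(
            nn.Linear(128 + grasp_dim, 128), nn.ReLU(),
            nn.Linear(128, taxels),  # predicted tactile response
        )

    def forward(self, cloud: torch.Tensor, grasp: torch.Tensor) -> torch.Tensor:
        # cloud: (batch, n_points, 3), grasp: (batch, grasp_dim)
        per_point = self.point_mlp(cloud)            # (batch, n_points, 128)
        global_feat = per_point.max(dim=1).values    # order-invariant cloud code
        return self.head(torch.cat([global_feat, grasp], dim=1))

if __name__ == "__main__":
    net = TactileFromVision()
    cloud = torch.rand(4, 1024, 3)   # partial view of an unknown object
    grasp = torch.rand(4, 7)         # e.g. two contact points plus an approach angle
    target = torch.rand(4, 24)       # taxel values recorded for that grasp
    loss = nn.MSELoss()(net(cloud, grasp), target)
    loss.backward()
    print(loss.item())

The max pool is used here because it makes the cloud encoding independent of point ordering, which matters when the input is a raw partial point cloud rather than a recognised object model.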
author2 Gil, Pablo
author_facet Gil, Pablo
Zapata-Impata, Brayan S.
author Zapata-Impata, Brayan S.
author_sort Zapata-Impata, Brayan S.
title Robotic manipulation based on visual and tactile perception
title_short Robotic manipulation based on visual and tactile perception
title_full Robotic manipulation based on visual and tactile perception
title_fullStr Robotic manipulation based on visual and tactile perception
title_full_unstemmed Robotic manipulation based on visual and tactile perception
title_sort robotic manipulation based on visual and tactile perception
publisher Universidad de Alicante
publishDate 2021
url http://hdl.handle.net/10045/118217
work_keys_str_mv AT zapataimpatabrayans roboticmanipulationbasedonvisualandtactileperception
_version_ 1719486711898046464