Transfer of Learning from Vision to Touch: A Hybrid Deep Convolutional Neural Network for Visuo-Tactile 3D Object Recognition

Transfer of learning, or leveraging a pre-trained network and fine-tuning it to perform new tasks, has been successfully applied in a variety of machine intelligence fields, including computer vision, natural language processing and audio/speech recognition. Drawing inspiration from neuroscience resea...


Bibliographic Details
Main Authors: Ghazal Rouhafzay, Ana-Maria Cretu, Pierre Payeur
Format: Article
Language:English
Published: MDPI AG 2021-12-01
Series:Sensors
Subjects:
Online Access:https://www.mdpi.com/1424-8220/21/1/113
id doaj-936a966ddcc648abbdd46ff54b957932
record_format Article
spelling doaj-936a966ddcc648abbdd46ff54b957932 (date stamp 2020-12-28T00:00:13Z)
Transfer of Learning from Vision to Touch: A Hybrid Deep Convolutional Neural Network for Visuo-Tactile 3D Object Recognition
Sensors (MDPI AG, ISSN 1424-8220), vol. 21, no. 1, article 113, published 2021-12-01, DOI 10.3390/s21010113
Ghazal Rouhafzay, Department of Systems and Computer Engineering, University of Ottawa, Ottawa, ON K1N 6N5, Canada
Ana-Maria Cretu, Department of Computer Science and Engineering, Université du Québec en Outaouais, Gatineau, QC J8X 3X7, Canada
Pierre Payeur, Department of Systems and Computer Engineering, University of Ottawa, Ottawa, ON K1N 6N5, Canada
Online access: https://www.mdpi.com/1424-8220/21/1/113
collection DOAJ
language English
format Article
sources DOAJ
author Ghazal Rouhafzay
Ana-Maria Cretu
Pierre Payeur
title Transfer of Learning from Vision to Touch: A Hybrid Deep Convolutional Neural Network for Visuo-Tactile 3D Object Recognition
publisher MDPI AG
series Sensors
issn 1424-8220
publishDate 2021-12-01
description Transfer of learning, or leveraging a pre-trained network and fine-tuning it to perform new tasks, has been successfully applied in a variety of machine intelligence fields, including computer vision, natural language processing and audio/speech recognition. Drawing inspiration from neuroscience research suggesting that both visual and tactile stimuli activate similar neural networks in the human brain, in this work we explore the idea of transferring learning from vision to touch in the context of 3D object recognition. In particular, deep convolutional neural networks (CNNs) pre-trained on visual images are adapted and evaluated for the classification of tactile data sets. To do so, we ran experiments with five different pre-trained CNN architectures on five different datasets acquired with different tactile sensor technologies, including BathTip, GelSight, a force-sensing resistor (FSR) array, a high-resolution virtual FSR sensor, and the tactile sensors of the Barrett robotic hand. The results confirm the transferability of learning from vision to touch for interpreting 3D models. Owing to their higher resolution, optical tactile sensors were shown to achieve higher classification rates based on visual features than technologies relying on pressure measurements. A further analysis of the weight updates in the convolutional layers is performed to measure the similarity between visual and tactile features for each tactile sensing technology. Comparing the weight updates across convolutional layers suggests that, by updating only a few convolutional layers, a CNN pre-trained on visual data can be used efficiently to classify tactile data. Accordingly, we propose a hybrid architecture that performs both visual and tactile 3D object recognition with a MobileNetV2 backbone. MobileNetV2 is chosen for its small size, and hence its suitability for deployment on mobile devices, so that a single network can classify both visual and tactile data. The proposed architecture achieves an accuracy of 100% on visual data and 77.63% on tactile data.
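The fine-tuning procedure summarized in the description can be illustrated with a short sketch. The following is a minimal, hypothetical example (not the authors' released code), assuming a PyTorch/torchvision environment: a MobileNetV2 backbone pre-trained on ImageNet is reused for tactile classification by freezing most convolutional blocks, fine-tuning only the last few, and attaching a new classifier head; a small helper then reports the per-block weight change as a rough proxy for the visual-to-tactile feature similarity analysis mentioned in the abstract. The number of tactile classes and the number of unfrozen blocks are illustrative assumptions.

import copy

import torch
import torch.nn as nn
from torchvision import models

NUM_TACTILE_CLASSES = 10   # assumption: depends on the tactile dataset used
UNFROZEN_BLOCKS = 3        # assumption: how many trailing feature blocks to fine-tune

# MobileNetV2 pre-trained on visual images (ImageNet) acts as the "vision" source network.
model = models.mobilenet_v2(weights=models.MobileNet_V2_Weights.IMAGENET1K_V1)
pretrained_features = copy.deepcopy(model.features)  # snapshot used later to measure weight updates

# Freeze all layers, then unfreeze only the last few convolutional blocks for fine-tuning on touch.
for param in model.parameters():
    param.requires_grad = False
for block in model.features[-UNFROZEN_BLOCKS:]:
    for param in block.parameters():
        param.requires_grad = True

# Replace the ImageNet classifier head with one sized for the tactile classes.
model.classifier = nn.Sequential(
    nn.Dropout(0.2),
    nn.Linear(model.last_channel, NUM_TACTILE_CLASSES),
)

optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-4
)
criterion = nn.CrossEntropyLoss()

def train_step(tactile_images, labels):
    # One fine-tuning step on a batch of tactile readings rendered as 3-channel image tensors.
    optimizer.zero_grad()
    loss = criterion(model(tactile_images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()

def layerwise_weight_change():
    # Mean absolute weight update per feature block after fine-tuning: a rough proxy for how far
    # the learned tactile features drift from the original visual ones (larger change = less similar).
    changes = []
    for before, after in zip(pretrained_features, model.features):
        deltas = [(pa - pb).abs().mean().item()
                  for pb, pa in zip(before.parameters(), after.parameters())]
        changes.append(sum(deltas) / len(deltas) if deltas else 0.0)
    return changes

In this sketch, blocks left frozen report a change of zero, so the informative comparison is across the unfrozen layers; deciding how many layers actually need updating is what the weight-update analysis described in the abstract addresses.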
topic 3D object recognition
transfer learning
machine intelligence
convolutional neural networks
tactile sensors
force-sensing resistor
url https://www.mdpi.com/1424-8220/21/1/113