A Deep Neural Network Sensor for Visual Servoing in 3D Spaces

Bibliographic Details
Main Authors: Petar Durdevic, Daniel Ortiz-Arroyo
Format: Article
Language: English
Published: MDPI AG, 2020-03-01
Series: Sensors
Online Access: https://www.mdpi.com/1424-8220/20/5/1437
Description
Summary: This paper describes a novel stereo vision sensor based on deep neural networks that can be used to produce a feedback signal for visual servoing in unmanned aerial vehicles such as drones. Two deep convolutional neural networks attached to the drone's stereo camera are trained to detect wind turbines in images, and stereo triangulation is used to calculate the distance from the wind turbine to the drone. Our experimental results show that the sensor produces data accurate enough to be used for servoing, even in the presence of noise generated when the drone is not completely stable. Our results also show that appropriate filtering of the signals is needed and that, to produce correct results, the wind turbine must be kept within the field of view of both cameras so that both deep neural networks can detect it.
ISSN:1424-8220
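
As a rough illustration of the distance-measurement step described in the summary, the sketch below triangulates depth from the horizontal disparity between the turbine bounding-box centres reported by the two detection networks, and then smooths the noisy estimates with a simple moving-average filter. The focal length, baseline, window size, and all function names are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (assumed parameters, not the paper's code): distance from stereo
# triangulation, Z = f * B / d, followed by a moving-average filter to attenuate
# noise caused by drone motion.

from collections import deque

FOCAL_LENGTH_PX = 800.0   # assumed focal length of each camera, in pixels
BASELINE_M = 0.12         # assumed distance between the two camera centres, in metres


def triangulate_distance(x_left: float, x_right: float) -> float:
    """Return depth Z = f * B / d from the horizontal pixel coordinates of the
    detected turbine centre in the left and right images."""
    disparity = x_left - x_right
    if disparity <= 0:
        # The target must be detected by both cameras for triangulation to work.
        raise ValueError("non-positive disparity: turbine not visible to both cameras")
    return FOCAL_LENGTH_PX * BASELINE_M / disparity


class MovingAverageFilter:
    """Fixed-window moving average used to smooth the noisy distance signal."""

    def __init__(self, window: int = 5):
        self.samples = deque(maxlen=window)

    def update(self, value: float) -> float:
        self.samples.append(value)
        return sum(self.samples) / len(self.samples)


if __name__ == "__main__":
    # Hypothetical bounding-box centre x-coordinates from successive stereo frames.
    detections = [(412.0, 380.5), (413.2, 380.9), (411.1, 379.8)]
    filt = MovingAverageFilter(window=3)
    for x_l, x_r in detections:
        z = triangulate_distance(x_l, x_r)
        print(f"raw: {z:.2f} m, filtered: {filt.update(z):.2f} m")
```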