Computationally efficient visual–inertial sensor fusion for Global Positioning System–denied navigation on a small quadrotor


Bibliographic Details
Main Authors: Chang Liu, Stephen D Prior, WT Luke Teacy, Martin Warner
Format: Article
Language: English
Published: SAGE Publishing, 2016-03-01
Series: Advances in Mechanical Engineering
Online Access: https://doi.org/10.1177/1687814016640996
ISSN: 1687-8140
Description: Because of the complementary nature of visual and inertial sensors, combining the two can provide the fast and accurate six-degree-of-freedom state estimation that is fundamental to robotic navigation, especially by unmanned aerial vehicles, in Global Positioning System–denied environments. This article presents a computationally efficient visual–inertial fusion algorithm that separates orientation fusion from the position fusion process. The algorithm performs six-degree-of-freedom state estimation from a gyroscope, an accelerometer and measurements from a monocular visual simultaneous localisation and mapping (SLAM) algorithm, and it also recovers the metric scale of the monocular SLAM output. In particular, orientation fusion and position fusion are treated as two separate processes: orientation fusion uses a very efficient gradient descent algorithm, whereas position fusion uses a 13-state linear Kalman filter. Eliminating the magnetometer avoids the problem of magnetic distortion, making the system power-on-and-go once the accelerometer is factory calibrated. The resulting algorithm shows a significant computational reduction over the conventional extended Kalman filter, with competitive accuracy. Moreover, the separation of the orientation and position fusion processes allows the algorithm to be implemented on two individual hardware elements and thus lets the two fusion processes execute concurrently.
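The abstract describes a two-part estimator: a gradient-descent orientation filter driven by gyroscope and accelerometer only (the description matches the general form of Madgwick-style IMU filters) and a linear Kalman filter for position. The paper's actual 13-state filter and visual-scale recovery are not reproduced here; the sketch below is a minimal illustration under those assumptions, pairing a Madgwick-style quaternion update with a simplified one-dimensional constant-velocity Kalman position step. Function names and the gains `beta`, `q_var` and `r_var` are illustrative choices, not values from the paper.

```python
import numpy as np

def quat_mult(p, q):
    """Hamilton product of two quaternions [w, x, y, z]."""
    w1, x1, y1, z1 = p
    w2, x2, y2, z2 = q
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def orientation_step(q, gyro, accel, beta=0.1, dt=0.01):
    """One gradient-descent orientation update from gyro (rad/s) and
    accelerometer (m/s^2) readings; no magnetometer, so yaw is unobserved."""
    w, x, y, z = q
    a = accel / np.linalg.norm(accel)
    # Error between the gravity direction predicted from q and the
    # normalised accelerometer measurement.
    f = np.array([
        2.0*(x*z - w*y) - a[0],
        2.0*(w*x + y*z) - a[1],
        2.0*(0.5 - x*x - y*y) - a[2],
    ])
    J = np.array([
        [-2.0*y,  2.0*z, -2.0*w, 2.0*x],
        [ 2.0*x,  2.0*w,  2.0*z, 2.0*y],
        [ 0.0,   -4.0*x, -4.0*y, 0.0],
    ])
    grad = J.T @ f
    n = np.linalg.norm(grad)
    step = grad / n if n > 1e-12 else np.zeros(4)
    # Integrate the gyro rate, corrected along the negative gradient.
    q_dot = 0.5 * quat_mult(q, np.array([0.0, *gyro])) - beta * step
    q = q + q_dot * dt
    return q / np.linalg.norm(q)

def position_step(x, P, accel_world, z_pos, dt=0.01, q_var=1e-3, r_var=1e-2):
    """One predict/update cycle of a 1-D constant-velocity linear Kalman
    filter: state x = [position, velocity], IMU acceleration as control
    input, z_pos the (already metrically scaled) visual position."""
    F = np.array([[1.0, dt], [0.0, 1.0]])
    B = np.array([0.5*dt*dt, dt])
    H = np.array([[1.0, 0.0]])
    x = F @ x + B * accel_world           # predict with IMU acceleration
    P = F @ P @ F.T + q_var * np.eye(2)
    y = z_pos - H @ x                     # innovation from visual SLAM
    S = H @ P @ H.T + r_var
    K = (P @ H.T) / S                     # Kalman gain, shape (2, 1)
    x = x + (K * y).ravel()
    P = (np.eye(2) - K @ H) @ P
    return x, P
```

Because the two steps share no state, they can run at different rates on separate processors, which is the concurrency property the abstract highlights.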