Automatic geo-referencing by integrating camera vision and inertial measurements
Main Author: Randeniya, Duminda I. B
Format: Others
Published: Scholar Commons, Graduate Theses and Dissertations, 2007
Subjects: Multi-sensor fusion; Vision/INS integration; Intelligent vehicular systems; Computer vision; Inertial navigation; American Studies; Arts and Humanities
Record ID: ndltd-USF-oai-scholarcommons.usf.edu-etd-3332 (NDLTD)
Online Access: http://scholarcommons.usf.edu/etd/2333 http://scholarcommons.usf.edu/cgi/viewcontent.cgi?article=3332&context=etd
Description:
An alternative sensor system to the inertial measurement unit (IMU) is essential for intelligent land navigation systems when the vehicle travels in a GPS-deprived environment. The sensor system used to update the IMU for a reliable navigation solution must be passive, depending on no external signal. This dissertation presents the results of an effort in which position and orientation data from vision and inertial sensors were integrated. Information from a sequence of images, captured by a monocular camera attached to a survey vehicle at a maximum frequency of 3 frames per second, was used to correct the inertial system installed in the same vehicle for its inherent error accumulation. Specifically, the rotations and translations estimated from point correspondences tracked through the image sequence were used in the integration. Such an effort requires two tasks to be performed.
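As an illustrative aside (not code from the dissertation), the relative rotation and unit-norm translation between consecutive frames can be recovered from tracked point correspondences roughly as follows, assuming a calibrated camera matrix `K` and OpenCV; the function and variable names are hypothetical.

```python
import cv2
import numpy as np

def relative_pose(pts_prev, pts_curr, K):
    """Estimate rotation R and unit-norm translation t between two
    frames from tracked point correspondences (Nx2 pixel arrays).

    Illustrative sketch only; the dissertation's estimator may differ.
    """
    # RANSAC-fitted essential matrix; the inlier mask doubles as an
    # epipolar-constraint check on the correspondences.
    E, mask = cv2.findEssentialMat(pts_prev, pts_curr, K,
                                   method=cv2.RANSAC,
                                   prob=0.999, threshold=1.0)
    # Decompose E into R and t; t is recoverable only up to scale,
    # so it comes back with unit norm.
    _, R, t, _ = cv2.recoverPose(E, pts_prev, pts_curr, K, mask=mask)
    return R, t
```

Because a monocular camera observes translation only up to scale, `t` is returned normalized, which is why the abstract speaks of "normalized translations" that must later be mapped to geodetic coordinates.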
The first task is calibration: estimating the intrinsic properties of the vision sensors (cameras), such as the focal length and lens distortion parameters, and determining the transformation between the camera and the inertial system. Calibrating the two-sensor system under indoor conditions does not yield a transformation that is appropriate and practical for outdoor maneuvers, owing to unavoidable differences between indoor and outdoor conditions. Moreover, using custom calibration objects under outdoor operational conditions is infeasible, because the larger field of view demands impractically large calibration objects. Calibration therefore becomes a critical issue, particularly if the integrated system is to be used in Intelligent Transportation Systems applications, and it must be completed before the integration process so that the rotations and translations from the vision system can be estimated successfully.
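For contrast with the outdoor problem described above, a minimal indoor intrinsic-calibration sketch follows, assuming a standard checkerboard target and OpenCV; the 9x6 pattern and file-name glob are hypothetical, not the dissertation's setup.

```python
import glob
import cv2
import numpy as np

# Hypothetical 9x6 inner-corner checkerboard; world points lie on Z = 0.
pattern = (9, 6)
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

obj_pts, img_pts = [], []
for fname in glob.glob("calib_*.png"):   # illustrative file pattern
    gray = cv2.imread(fname, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        obj_pts.append(objp)
        img_pts.append(corners)

# K holds the focal lengths and principal point; dist holds the lens
# distortion coefficients mentioned in the abstract.
rms, K, dist, _, _ = cv2.calibrateCamera(
    obj_pts, img_pts, gray.shape[::-1], None, None)
```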
The second task is the effective fusion of the inertial and vision sensor systems. The automated algorithm that identifies point correspondences in images makes the approach usable in real-time autonomous driving maneuvers. To verify the accuracy of the established correspondences, independent constraints such as epipolar lines and correspondence flow directions were applied. A pre-filter was also used to smooth out the noise associated with the vision sensor (camera) measurements. A novel approach was used to obtain the geodetic coordinates, i.e., latitude, longitude, and altitude, from the normalized translations determined by the vision sensor. Finally, the position estimates based on the vision sensor were integrated with those of the inertial system in a decentralized format using a Kalman filter. The integrated vision/inertial position estimates were compared with those from 1) the inertial/GPS system output and 2) an actual survey performed on the same roadway.
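A decentralized (loosely coupled) fusion of this kind can be sketched as a standard Kalman measurement update, in which the INS estimate serves as the prior and the vision-derived position as the measurement; the position-only state and identity measurement model below are simplifying assumptions, not the dissertation's tuned filter.

```python
import numpy as np

def kf_update(x, P, z, R_meas):
    """One Kalman measurement update fusing a vision-derived position
    fix z (3x1) into the INS state estimate x with covariance P.

    Sketch under simplifying assumptions: the state is position-only
    and the vision sensor measures it directly (H = I).
    """
    H = np.eye(3)                       # identity measurement model
    S = H @ P @ H.T + R_meas            # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
    x = x + K @ (z - H @ x)             # corrected state
    P = (np.eye(3) - K @ H) @ P         # corrected covariance
    return x, P

# Hypothetical usage: fuse a vision fix into the INS solution.
# x_ins, P_ins = kf_update(x_ins, P_ins, z_vision, R_vision)
```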
This comparison demonstrates that vision can indeed be used to supplement the inertial measurements during potential GPS outages. The derived intrinsic properties and the transformation between the individual sensors were also verified during two separate test runs on an actual roadway section.