Sensor Fusion for Accurate Ego-Motion Estimation in a Moving Platform
With the advent of “Internet of Things” (IoT) technology, many studies have sought to apply IoT to mobile platforms such as smartphones, robots, and moving vehicles. Estimating the ego-motion of a moving platform is essential for building a map and understanding the surrounding environment. In this paper, we describe an ego-motion estimation method that uses a vision sensor, a sensor type widely employed in IoT systems. We then propose a new fusion method that improves the accuracy of motion estimation by incorporating other sensors in cases where a vision sensor alone is insufficient. Because each sensor measures a different number of dimensions, simply adding or averaging the measurements biases the result toward one of the data sources, and the same problem arises when the measurements are combined in a weighted sum based on the sensors' covariances. To address this, the proposed method uses the more accurate (but lower-dimensional) sensor data to generate artificial measurements, improving the accuracy of the estimate even in dimensions that are not directly measured.
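The fusion problem the abstract describes can be made concrete with a small numerical sketch. The Python snippet below is illustrative only and is not the authors' implementation: the sensor models, the numbers, and the planar-motion constraint used to synthesize the unmeasured lateral component are all assumptions. It shows a covariance-weighted (Kalman-style) update in which an accurate two-dimensional measurement corrects only part of a three-dimensional vision estimate, and how an artificial measurement for the missing dimension lets the accurate sensor improve that dimension as well.

```python
# Minimal sketch (not the paper's actual formulation) of fusing a full 3-D
# vision estimate of planar ego-motion [x, y, yaw] with a more accurate
# sensor that measures only a subset of those dimensions.
import numpy as np

# Vision sensor: full state, relatively large uncertainty (illustrative values)
vision_mean = np.array([1.10, 0.20, 0.12])
vision_cov = np.diag([0.04, 0.04, 0.02])

# Accurate low-dimensional sensor (assumed here to measure x and yaw only)
odom_mean = np.array([1.00, 0.10])
odom_cov = np.diag([0.0025, 0.0004])
H = np.array([[1.0, 0.0, 0.0],   # maps the full state onto the measured dims
              [0.0, 0.0, 1.0]])

def fuse_measurement(x, P, z, R, H):
    """Standard covariance-weighted (Kalman-style) measurement update."""
    S = H @ P @ H.T + R              # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)   # gain: how much to trust the new data
    x_new = x + K @ (z - H @ x)
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new

# 1) Direct covariance-weighted fusion: only x and yaw are corrected;
#    the unmeasured y component keeps the vision-only error.
fused, _ = fuse_measurement(vision_mean, vision_cov, odom_mean, odom_cov, H)
print("direct fusion:", fused)

# 2) The idea sketched in the abstract: use the accurate low-dimensional data
#    to create an artificial measurement for the unmeasured dimension too.
#    A hypothetical planar-motion constraint ties y to x and yaw.
y_artificial = odom_mean[0] * np.sin(odom_mean[1])
z_aug = np.array([odom_mean[0], y_artificial, odom_mean[1]])
R_aug = np.diag([0.0025, 0.01, 0.0004])   # looser noise on the synthesized dim
fused2, _ = fuse_measurement(vision_mean, vision_cov, z_aug, R_aug, np.eye(3))
print("fusion with artificial y:", fused2)
```

In this toy setup the direct update leaves the lateral component untouched by the accurate sensor, while the augmented update nudges it toward the value implied by the assumed motion constraint, which is the kind of benefit the abstract claims for its artificial-data approach.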
Main Authors: | Chuho Yi, Jungwon Cho |
---|---|
Format: | Article |
Language: | English |
Published: | SAGE Publishing, 2015-10-01 |
Series: | International Journal of Distributed Sensor Networks |
Online Access: | https://doi.org/10.1155/2015/831780 |
id | doaj-e91f57abd3814d24a51187e42d56fa9e |
---|---|
record_format | Article |
spelling | doaj-e91f57abd3814d24a51187e42d56fa9e 2020-11-25T03:39:34Z eng SAGE Publishing International Journal of Distributed Sensor Networks 1550-1477 2015-10-01 vol. 11 10.1155/2015/831780 831780 Sensor Fusion for Accurate Ego-Motion Estimation in a Moving Platform Chuho Yi (ADAS Department, LG Electronics, Incheon 22744, Republic of Korea) Jungwon Cho (Department of Computer Education, Jeju National University, Jeju 63243, Republic of Korea) [abstract as in the description field] https://doi.org/10.1155/2015/831780 |
collection | DOAJ |
language | English |
format | Article |
sources | DOAJ |
author | Chuho Yi; Jungwon Cho |
spellingShingle | Chuho Yi; Jungwon Cho; Sensor Fusion for Accurate Ego-Motion Estimation in a Moving Platform; International Journal of Distributed Sensor Networks |
author_facet | Chuho Yi; Jungwon Cho |
author_sort | Chuho Yi |
title | Sensor Fusion for Accurate Ego-Motion Estimation in a Moving Platform |
title_short | Sensor Fusion for Accurate Ego-Motion Estimation in a Moving Platform |
title_full | Sensor Fusion for Accurate Ego-Motion Estimation in a Moving Platform |
title_fullStr | Sensor Fusion for Accurate Ego-Motion Estimation in a Moving Platform |
title_full_unstemmed | Sensor Fusion for Accurate Ego-Motion Estimation in a Moving Platform |
title_sort | sensor fusion for accurate ego-motion estimation in a moving platform |
publisher | SAGE Publishing |
series | International Journal of Distributed Sensor Networks |
issn | 1550-1477 |
publishDate | 2015-10-01 |
description | With the advent of “Internet of Things” (IoT) technology, many studies have sought to apply IoT to mobile platforms such as smartphones, robots, and moving vehicles. Estimating the ego-motion of a moving platform is essential for building a map and understanding the surrounding environment. In this paper, we describe an ego-motion estimation method that uses a vision sensor, a sensor type widely employed in IoT systems. We then propose a new fusion method that improves the accuracy of motion estimation by incorporating other sensors in cases where a vision sensor alone is insufficient. Because each sensor measures a different number of dimensions, simply adding or averaging the measurements biases the result toward one of the data sources, and the same problem arises when the measurements are combined in a weighted sum based on the sensors' covariances. To address this, the proposed method uses the more accurate (but lower-dimensional) sensor data to generate artificial measurements, improving the accuracy of the estimate even in dimensions that are not directly measured. |
url | https://doi.org/10.1155/2015/831780 |
work_keys_str_mv | AT chuhoyi sensorfusionforaccurateegomotionestimationinamovingplatform; AT jungwoncho sensorfusionforaccurateegomotionestimationinamovingplatform |
_version_ | 1724537991927431168 |