Driven by Vision: Learning Navigation by Visual Localization and Trajectory Prediction
When driving, people make decisions based on current traffic as well as their desired route. They have a mental map of known routes and are often able to navigate without needing directions. Current published self-driving models improve their performance when given additional GPS information. Here we aim to push self-driving research forward and perform route planning even in the complete absence of GPS at inference time. Our system learns to predict, in real time, the vehicle's current location and future trajectory on a known map, given only the raw video stream and the final destination. Trajectories consist of instant steering commands that depend on present traffic, as well as longer-term navigation decisions towards a specific destination. Along with our novel approach to localization and navigation from visual data, we also introduce a large urban-driving dataset consisting of video and GPS streams collected with a smartphone while driving. The GPS is automatically processed to obtain supervision labels and to create an analytical representation of the traversed map. In tests, our solution outperforms published state-of-the-art methods on visual localization and steering and provides reliable navigation assistance between any two known locations. We also show that our system can adapt to short- and long-term changes in weather conditions and in the structure of the urban environment. We make the entire dataset and the code publicly available.
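The abstract mentions that the raw GPS stream is automatically processed into supervision labels for steering. As a minimal sketch of what such processing could look like (not the authors' actual pipeline; the function names and the choice of heading change as a steering proxy are assumptions), one can compute the bearing between consecutive GPS fixes and use per-step heading changes as turn labels:

```python
import math

def bearing(lat1, lon1, lat2, lon2):
    """Initial great-circle bearing in degrees from point 1 to point 2."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    x = math.sin(dlon) * math.cos(phi2)
    y = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlon)
    return math.degrees(math.atan2(x, y)) % 360.0

def heading_changes(track):
    """Per-step heading change in degrees, wrapped to [-180, 180), along a
    GPS track given as a list of (lat, lon) fixes. Positive values are right
    turns, negative values left turns; a crude stand-in for steering labels."""
    headings = [bearing(*p, *q) for p, q in zip(track, track[1:])]
    # Wrap each difference so that e.g. 350° -> 10° reads as a +20° turn,
    # not a -340° one.
    return [(h2 - h1 + 180.0) % 360.0 - 180.0
            for h1, h2 in zip(headings, headings[1:])]

# A track heading due east then turning due south: one ~90° right turn.
turn = heading_changes([(0.0, 0.0), (0.0, 0.001), (-0.001, 0.001)])
```

In practice, such labels would also need smoothing and outlier rejection, since consumer smartphone GPS fixes are noisy.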
Main Authors: | Marius Leordeanu, Iulia Paraicu |
---|---|
Format: | Article |
Language: | English |
Published: | MDPI AG, 2021-01-01 |
Series: | Sensors |
Subjects: | autonomous driving; self-driving; visual localization; visual navigation; deep learning; trajectory prediction |
Online Access: | https://www.mdpi.com/1424-8220/21/3/852 |
id |
doaj-346e82342c494fabbf92d993d9c530b7 |
record_format |
Article |
spelling |
doaj-346e82342c494fabbf92d993d9c530b7 | 2021-01-28T00:06:17Z | eng | MDPI AG | Sensors | 1424-8220 | 2021-01-01 | Vol. 21, No. 3, Article 852 | 10.3390/s21030852 | Driven by Vision: Learning Navigation by Visual Localization and Trajectory Prediction | Marius Leordeanu (Institute of Mathematics of the Romanian Academy (IMAR), Calea Grivitei 21, 010702 Bucharest, Romania); Iulia Paraicu (Institute of Mathematics of the Romanian Academy (IMAR), Calea Grivitei 21, 010702 Bucharest, Romania) | https://www.mdpi.com/1424-8220/21/3/852 | autonomous driving; self-driving; visual localization; visual navigation; deep learning; trajectory prediction |
collection |
DOAJ |
language |
English |
format |
Article |
sources |
DOAJ |
author |
Marius Leordeanu
Iulia Paraicu |
spellingShingle |
Marius Leordeanu
Iulia Paraicu
Driven by Vision: Learning Navigation by Visual Localization and Trajectory Prediction
Sensors
autonomous driving
self-driving
visual localization
visual navigation
deep learning
trajectory prediction |
author_facet |
Marius Leordeanu
Iulia Paraicu |
author_sort |
Marius Leordeanu |
title |
Driven by Vision: Learning Navigation by Visual Localization and Trajectory Prediction |
title_short |
Driven by Vision: Learning Navigation by Visual Localization and Trajectory Prediction |
title_full |
Driven by Vision: Learning Navigation by Visual Localization and Trajectory Prediction |
title_fullStr |
Driven by Vision: Learning Navigation by Visual Localization and Trajectory Prediction |
title_full_unstemmed |
Driven by Vision: Learning Navigation by Visual Localization and Trajectory Prediction |
title_sort |
driven by vision: learning navigation by visual localization and trajectory prediction |
publisher |
MDPI AG |
series |
Sensors |
issn |
1424-8220 |
publishDate |
2021-01-01 |
description |
When driving, people make decisions based on current traffic as well as their desired route. They have a mental map of known routes and are often able to navigate without needing directions. Current published self-driving models improve their performance when given additional GPS information. Here we aim to push self-driving research forward and perform route planning even in the complete absence of GPS at inference time. Our system learns to predict, in real time, the vehicle's current location and future trajectory on a known map, given only the raw video stream and the final destination. Trajectories consist of instant steering commands that depend on present traffic, as well as longer-term navigation decisions towards a specific destination. Along with our novel approach to localization and navigation from visual data, we also introduce a large urban-driving dataset consisting of video and GPS streams collected with a smartphone while driving. The GPS is automatically processed to obtain supervision labels and to create an analytical representation of the traversed map. In tests, our solution outperforms published state-of-the-art methods on visual localization and steering and provides reliable navigation assistance between any two known locations. We also show that our system can adapt to short- and long-term changes in weather conditions and in the structure of the urban environment. We make the entire dataset and the code publicly available. |
topic |
autonomous driving
self-driving
visual localization
visual navigation
deep learning
trajectory prediction |
url |
https://www.mdpi.com/1424-8220/21/3/852 |
work_keys_str_mv |
AT mariusleordeanu drivenbyvisionlearningnavigationbyvisuallocalizationandtrajectoryprediction
AT iuliaparaicu drivenbyvisionlearningnavigationbyvisuallocalizationandtrajectoryprediction |
_version_ |
1724320195514728448 |