Visual SLAM using sparse maps based on feature points

Bibliographic Details
Main Authors: Brunnegård, Oliver, Wikestad, Daniel
Format: Others
Language: English
Published: Högskolan i Halmstad, Halmstad Embedded and Intelligent Systems Research (EIS) 2017
Subjects:
Online Access: http://urn.kb.se/resolve?urn=urn:nbn:se:hh:diva-34681
Description
Summary: Visual Simultaneous Localisation And Mapping is a useful tool for creating 3D environments with feature points. These visual systems could be very valuable in autonomous vehicles for improving localisation, since cameras are a fairly cheap sensor capable of gathering a large amount of data. More efficient algorithms are still needed to better interpret the most valuable information. This paper analyses how much a feature-based map can be reduced without losing significant accuracy during localisation. Semantic segmentation produced by a deep neural network is used to classify the features that make up the map, and the map is reduced by removing certain classes. The results show that feature-based maps can be significantly reduced without losing accuracy. The use of classes gave promising results: large numbers of features were removed, yet the system could still localise accurately. Removing some classes gave the same or even better results in certain weather conditions compared to localisation with the full-scale map.
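
The map-reduction idea summarised above can be illustrated with a small sketch: a sparse map in which every feature point carries a semantic class label (assigned by a segmentation network), and whole classes are filtered out before localisation. The data structure, the class names, and the function below are illustrative assumptions for this sketch, not the implementation used in the thesis.

    # Minimal sketch (assumed names): drop map points belonging to selected
    # semantic classes before localisation.
    from dataclasses import dataclass

    @dataclass
    class MapPoint:
        position: tuple          # 3D position (x, y, z)
        descriptor: bytes        # e.g. a binary feature descriptor
        semantic_class: str      # label from the segmentation network

    def reduce_map(points, classes_to_remove):
        """Return a sparse map with all points of the given classes removed."""
        return [p for p in points if p.semantic_class not in classes_to_remove]

    # Example: remove classes that are likely to change between sessions.
    full_map = [
        MapPoint((1.2, 0.4, 8.0), b"...", "vehicle"),
        MapPoint((0.3, 1.1, 5.5), b"...", "building"),
        MapPoint((2.0, 0.2, 9.1), b"...", "vegetation"),
    ]
    reduced_map = reduce_map(full_map, classes_to_remove={"vehicle", "vegetation"})
    print(len(full_map), "->", len(reduced_map), "map points")

In this toy example the reduced map keeps only points on static structures such as buildings, mirroring the abstract's observation that removing some classes can preserve, or in certain conditions improve, localisation accuracy.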