Radar-to-Lidar: Heterogeneous Place Recognition via Joint Learning

Place recognition is critical for both offline mapping and online localization. However, current single-sensor place recognition remains challenging in adverse conditions. In this paper, a heterogeneous-measurement-based framework is proposed for long-term place recognition, which retrieves query radar scans from existing lidar (Light Detection and Ranging) maps. To achieve this, a deep neural network is built with joint training in the learning stage; in the testing stage, shared embeddings of radar and lidar are extracted for heterogeneous place recognition. To validate the effectiveness of the proposed method, we conducted tests and generalization experiments on multi-session public datasets and compared the results to other competitive methods. The experimental results indicate that our model can perform multiple types of place recognition: lidar-to-lidar (L2L), radar-to-radar (R2R), and radar-to-lidar (R2L), while the learned model is trained only once. We also release the source code publicly: https://github.com/ZJUYH/radar-to-lidar-place-recognition.
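The retrieval step described above — matching query radar scans against a lidar map through shared embeddings — can be sketched as a nearest-neighbor search in the joint embedding space. The snippet below is a minimal illustration, not the paper's implementation: it assumes the trained network has already produced fixed-length embedding vectors for both modalities, and simply ranks map places by cosine similarity.

```python
import numpy as np

def retrieve(query_embeddings, map_embeddings, k=1):
    """Return, for each query embedding, the indices of the k most
    similar map embeddings under cosine similarity.

    Illustrative only: in the paper, the embeddings for radar queries
    and the lidar map come from the jointly trained network.
    """
    # L2-normalize rows so the dot product equals cosine similarity
    q = query_embeddings / np.linalg.norm(query_embeddings, axis=1, keepdims=True)
    m = map_embeddings / np.linalg.norm(map_embeddings, axis=1, keepdims=True)
    sims = q @ m.T                     # (num_queries, num_map_places)
    # argsort descending, keep top-k indices per query
    return np.argsort(-sims, axis=1)[:, :k]

# Toy example: 3 lidar map places and 2 radar queries in a shared 4-D space
lidar_map = np.array([[1.0, 0.0, 0.0, 0.0],
                      [0.0, 1.0, 0.0, 0.0],
                      [0.0, 0.0, 1.0, 0.0]])
radar_queries = np.array([[0.9, 0.1, 0.0, 0.0],
                          [0.0, 0.2, 0.9, 0.0]])
print(retrieve(radar_queries, lidar_map))  # → [[0], [2]]
```

Because the embeddings are shared across modalities, the same routine serves all three settings reported in the paper (L2L, R2R, R2L); only the source of the query and map vectors changes.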


Bibliographic Details
Main Authors: Huan Yin, Xuecheng Xu, Yue Wang, Rong Xiong
Format: Article
Language: English
Published: Frontiers Media S.A., 2021-05-01
Series: Frontiers in Robotics and AI
ISSN: 2296-9144
DOI: 10.3389/frobt.2021.661199
Subjects: radar; lidar; heterogeneous measurements; place recognition; deep neural network; mobile robot
Online Access: https://www.frontiersin.org/articles/10.3389/frobt.2021.661199/full