Asymmetric Encoder-Decoder Structured FCN Based LiDAR to Color Image Generation
Main Authors: Hyun-Koo Kim, Kook-Yeol Yoo, Ju H. Park, Ho-Youl Jung
Format: Article
Language: English
Published: MDPI AG, 2019-11-01
Series: Sensors (ISSN 1424-8220)
DOI: 10.3390/s19214818
Subjects: advanced driver assistance system; asymmetric network model; image generation; lidar sensor; lidar imaging
Online Access: https://www.mdpi.com/1424-8220/19/21/4818
Author Affiliations:
Hyun-Koo Kim, Department of Information and Communication Engineering, Yeungnam University, Gyeongsan 38544, Korea
Kook-Yeol Yoo, Department of Information and Communication Engineering, Yeungnam University, Gyeongsan 38544, Korea
Ju H. Park, Department of Electrical Engineering, Yeungnam University, Gyeongsan 38544, Korea
Ho-Youl Jung, Department of Information and Communication Engineering, Yeungnam University, Gyeongsan 38544, Korea

Abstract:
In this paper, we propose a method of generating a color image from light detection and ranging (LiDAR) 3D reflection intensity. The proposed method is composed of two steps: projection of the LiDAR 3D reflection intensity into a 2D intensity image, and color image generation from the projected intensity by using a fully convolutional network (FCN). The color image must be generated from a very sparse projected intensity image. For this reason, the FCN is designed with an asymmetric network structure, i.e., the layer depth of the decoder is greater than that of the encoder. The well-known KITTI dataset, which covers various scenarios, is used for FCN training and performance evaluation. The performance of the asymmetric network structure is empirically analyzed for various encoder and decoder depth combinations. Simulations show that the proposed method generates images of fairly good visual quality while maintaining almost the same color as the ground-truth image. Moreover, the proposed FCN performs much better than conventional interpolation methods and the generative adversarial network based Pix2Pix. One interesting result is that the proposed FCN produces shadow-free, daylight color images. This is because LiDAR sensor data are produced by light reflection and are therefore not affected by sunlight or shadow.
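The abstract describes an FCN whose decoder is deeper than its encoder, so that a dense color image can be recovered from a very sparse projected intensity input. Below is a minimal PyTorch sketch of such an asymmetric encoder-decoder FCN; the layer counts, channel widths, and output activation are illustrative assumptions and are not taken from the paper.

```python
# Illustrative sketch (not the authors' exact architecture): an asymmetric
# encoder-decoder FCN mapping a sparse 1-channel projected LiDAR intensity
# image to a 3-channel color image. Depths and channel widths are assumed.
import torch
import torch.nn as nn


class AsymmetricFCN(nn.Module):
    def __init__(self, enc_depth=2, dec_depth=4, base_ch=32):
        super().__init__()
        # Encoder: each stage halves the spatial resolution.
        enc_layers, ch = [], 1
        for i in range(enc_depth):
            out_ch = base_ch * (2 ** i)
            enc_layers += [nn.Conv2d(ch, out_ch, 3, stride=2, padding=1),
                           nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True)]
            ch = out_ch
        self.encoder = nn.Sequential(*enc_layers)

        # Decoder: deeper than the encoder. The first enc_depth stages upsample
        # back to the input resolution; the extra stages refine at full
        # resolution to densify the sparse input.
        dec_layers = []
        for i in range(dec_depth):
            out_ch = max(ch // 2, base_ch)
            if i < enc_depth:  # upsampling stage
                dec_layers += [nn.ConvTranspose2d(ch, out_ch, 4, stride=2, padding=1)]
            else:              # full-resolution refinement stage
                dec_layers += [nn.Conv2d(ch, out_ch, 3, padding=1)]
            dec_layers += [nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True)]
            ch = out_ch
        dec_layers += [nn.Conv2d(ch, 3, 1), nn.Sigmoid()]  # RGB in [0, 1]
        self.decoder = nn.Sequential(*dec_layers)

    def forward(self, x):
        return self.decoder(self.encoder(x))


if __name__ == "__main__":
    # A sparse projected intensity image: batch of 1, 1 channel,
    # height and width divisible by 2 ** enc_depth.
    x = torch.zeros(1, 1, 128, 384)
    print(AsymmetricFCN()(x).shape)  # torch.Size([1, 3, 128, 384])
```

With enc_depth=2 and dec_depth=4, the two extra decoder stages operate at full resolution, which is one simple way to realize a decoder deeper than the encoder; the paper itself compares several encoder/decoder depth combinations on KITTI.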