Summary: | Road detection is a fundamental task in automated driving, and 3D lidar data has recently become a common input for it. In this paper, we propose to rearrange 3D lidar data into a new organized form that establishes direct spatial relationships within the point cloud, and we introduce new features for real-time road detection. Our model rests on two premises: (1) road regions are always flatter than non-road regions, and (2) light travels in straight lines in a uniform medium. Based on premise 1, we propose the difference-between-lines feature, while the ScanID density and the obstacle radial map are derived from premise 2. Our method first constructs an array of structures to store and reorganize the 3D input. Two novel features, difference-between-lines and ScanID density, are then extracted, from which we build a consistency map and an obstacle map in Bird's Eye View (BEV). Finally, the road region is obtained by fusing these two maps, and a refinement step polishes the result. Experiments on the public KITTI-Road benchmark show that our approach achieves one of the best performances among lidar-based road detection methods. To further demonstrate the effectiveness of our method on unstructured roads, visual results in rural areas are also presented.
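The sketch below is a minimal illustration, not the authors' implementation: it shows one way to reorganize an unordered lidar point cloud by scan-line ID, compute a difference-between-lines flatness cue, and accumulate a ScanID density grid in BEV. The array shapes, grid resolution, and function names are assumptions introduced here for illustration only.

```python
import numpy as np

def organize_by_scanline(points, scan_ids, n_lines=64, n_cols=2000):
    """Rearrange an unordered (N, 3) point cloud into a (n_lines, n_cols, 3)
    range-image-like array indexed by scan line (row) and azimuth bin (column).
    Cells not hit by any return stay NaN."""
    organized = np.full((n_lines, n_cols, 3), np.nan, dtype=np.float32)
    azimuth = np.arctan2(points[:, 1], points[:, 0])                  # [-pi, pi]
    cols = ((azimuth + np.pi) / (2 * np.pi) * (n_cols - 1)).astype(int)
    organized[scan_ids, cols] = points
    return organized

def difference_between_lines(organized):
    """Height change between vertically adjacent scan lines: small values
    suggest a flat, road-like surface; large values suggest obstacles."""
    z = organized[..., 2]
    return np.abs(np.diff(z, axis=0))                                 # (n_lines-1, n_cols)

def scanid_density(organized, grid_res=0.2, grid_size=200):
    """Count how many distinct scan lines hit each BEV cell: flat ground is
    swept by few lines per cell, while vertical obstacles stack many lines."""
    density = np.zeros((grid_size, grid_size), dtype=np.int32)
    for line in range(organized.shape[0]):
        pts = organized[line]
        valid = ~np.isnan(pts[:, 0])
        gx = (pts[valid, 0] / grid_res + grid_size / 2).astype(int)
        gy = (pts[valid, 1] / grid_res + grid_size / 2).astype(int)
        inside = (gx >= 0) & (gx < grid_size) & (gy >= 0) & (gy < grid_size)
        hit = np.zeros_like(density, dtype=bool)
        hit[gx[inside], gy[inside]] = True                            # each line counts once per cell
        density += hit
    return density
```

Thresholding the two cues and fusing the resulting BEV maps would then yield a rough road mask, in the spirit of the consistency/obstacle map fusion described above.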
|