Summary: | Master's thesis === National Chiao Tung University === Institute of Electronics === 106 === Lane mark detection is an essential component of road scene analysis for Advanced Driver Assistance Systems (ADAS). Although deep-learning-based road scene segmentation can achieve very high accuracy, it remains a challenge, given the limited onboard computing power, to reduce system complexity while maintaining high accuracy. In this thesis, we incorporate a deep convolutional neural network into a lane detection algorithm in order to extract robust lane mark features. To improve performance while targeting lower complexity, we investigate the advantages and disadvantages of several popular CNN architectures in terms of speed and storage. We start from SegNet with a VGG backbone and then study the Fully Convolutional Network (FCN), ResNet, and DenseNet. Through detailed experiments, we select favorable components from these architectures and ultimately construct a lightweight network based on the structure of DenseNet.
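As a rough illustration of the dense connectivity that the proposed lightweight network builds on, the sketch below shows a DenseNet-style block in PyTorch. The growth rate, layer count, and choice of framework are illustrative assumptions and do not reproduce the exact architecture designed in the thesis.

    # Minimal sketch of a DenseNet-style dense block (illustrative assumptions,
    # not the thesis's actual configuration).
    import torch
    import torch.nn as nn

    class DenseLayer(nn.Module):
        def __init__(self, in_channels, growth_rate):
            super().__init__()
            self.bn = nn.BatchNorm2d(in_channels)
            self.conv = nn.Conv2d(in_channels, growth_rate,
                                  kernel_size=3, padding=1, bias=False)

        def forward(self, x):
            # New feature maps are concatenated with the input (dense connectivity).
            out = self.conv(torch.relu(self.bn(x)))
            return torch.cat([x, out], dim=1)

    class DenseBlock(nn.Module):
        def __init__(self, in_channels, growth_rate=12, num_layers=4):
            super().__init__()
            layers, channels = [], in_channels
            for _ in range(num_layers):
                layers.append(DenseLayer(channels, growth_rate))
                channels += growth_rate
            self.block = nn.Sequential(*layers)
            self.out_channels = channels

        def forward(self, x):
            return self.block(x)

    # Example: a 4-layer block with growth rate 12 turns 32 input channels into 80.
    block = DenseBlock(32)
    y = block(torch.randn(1, 32, 64, 128))  # -> shape (1, 80, 64, 128)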
Our proposed network demonstrates real-time inference capability while maintaining accuracy comparable to most previous systems. We test our system on several datasets, including the challenging Cityscapes dataset (1024×512 resolution), achieving a mIoU of about 69.1%. We also design a more accurate but slower model, which achieves a mIoU of about 72.9% on the CamVid dataset. Moreover, we design a post-processing algorithm that groups the segments of a lane into a connected curve and fits a 3rd-order polynomial model to the curved lane. Our system shows promising results on the captured road scenes.
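The post-processing idea can be sketched as fitting a cubic polynomial to the pixel coordinates of one grouped lane. The snippet below assumes the segments have already been grouped into a single point set and fits x as a function of y; this grouping shortcut and the function names are illustrative, not the thesis's exact procedure.

    # Minimal sketch: fit a 3rd-order polynomial x = f(y) to one lane's points.
    # Assumes segment grouping has already produced a single set of lane pixels.
    import numpy as np

    def fit_lane_curve(lane_points):
        """lane_points: iterable of (x, y) pixel coordinates belonging to one lane."""
        pts = np.asarray(lane_points, dtype=float)
        # Lanes are roughly vertical in the image, so x is modeled as a cubic in y.
        coeffs = np.polyfit(pts[:, 1], pts[:, 0], deg=3)
        return np.poly1d(coeffs)

    # Example: evaluate the fitted curve on a few image rows.
    points = [(310, 700), (330, 600), (360, 500), (400, 400), (450, 300)]
    curve = fit_lane_curve(points)
    ys = np.linspace(300, 700, 5)
    xs = curve(ys)  # x positions of the lane along those rows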