Summary: | In this paper, we propose an effective source-aware domain enhancement and adaptation (SDEA) approach to increase the accuracy of existing convolutional neural network-based (CNN-based) object segmentation methods. We first identify the source elements, such as falling leaves, manhole covers, cirrus clouds, and advertisements, that often cause invalid object segmentation and make existing object segmentation methods provide unreliable information to ADAS (advanced driver assistance systems) applications. Second, we create a new GTA5-like (Grand Theft Auto V-like) dataset whose scenarios contain these source elements. Third, we perform domain adaptation on the created GTA5-like dataset to generate a photo-realistic GTA5-like dataset, namely GTA5<sub>s</sub><sup>SDEA</sup>. Without relabeling the pixel annotations of GTA5<sub>s</sub><sup>SDEA</sup>, we combine it with the realistic CamVid dataset to constitute a newly enhanced dataset. After being retrained on our enhanced dataset, existing CNN-based object segmentation methods achieve substantially higher segmentation accuracy. Comprehensive experimental results demonstrate the clear accuracy improvements obtained by applying our SDEA approach to the state-of-the-art object segmentation methods FCN (Fully Convolutional Networks), SegNet-basic, AdaptSegNet, and Gated-AdaptSegNet, providing more reliable information to ADAS applications.
|
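As a rough illustration of the enhancement-and-retraining step described above, the sketch below concatenates the photo-realistic synthetic GTA5<sub>s</sub><sup>SDEA</sup> data with CamVid and retrains an FCN-style segmenter on the combined set. It is a minimal sketch, not the paper's released code: the `SegmentationFolder` loader, folder layout, data paths, image size, and class count are illustrative assumptions, and any CNN-based segmentation model could stand in for `fcn_resnet50`.

```python
# Minimal sketch (illustrative assumptions, not the authors' code):
# build the enhanced training set (GTA5_s^SDEA + CamVid) and retrain
# a CNN-based segmentation network on it.
import glob, os
from torch import nn, optim
from torch.utils.data import Dataset, ConcatDataset, DataLoader
from torchvision.io import read_image
from torchvision.transforms import InterpolationMode
from torchvision.transforms.functional import resize
from torchvision.models.segmentation import fcn_resnet50

class SegmentationFolder(Dataset):
    """Hypothetical loader: reads (image, mask) PNG pairs from
    <root>/images and <root>/labels, sharing one label space."""
    def __init__(self, root, size=(360, 480)):
        self.images = sorted(glob.glob(os.path.join(root, "images", "*.png")))
        self.masks = sorted(glob.glob(os.path.join(root, "labels", "*.png")))
        self.size = list(size)

    def __len__(self):
        return len(self.images)

    def __getitem__(self, i):
        img = read_image(self.images[i]).float() / 255.0   # (3, H, W)
        mask = read_image(self.masks[i]).long()             # (1, H, W) class ids
        img = resize(img, self.size)
        mask = resize(mask, self.size, interpolation=InterpolationMode.NEAREST)
        return img, mask.squeeze(0)

NUM_CLASSES = 12  # e.g. 11 CamVid classes + void (illustrative)

# Enhanced training set: synthetic GTA5_s^SDEA scenes + real CamVid scenes,
# reusing the synthetic pixel annotations without relabeling.
enhanced = ConcatDataset([
    SegmentationFolder("data/gta5_sdea"),   # photo-realistic synthetic data
    SegmentationFolder("data/camvid"),      # realistic data
])
loader = DataLoader(enhanced, batch_size=4, shuffle=True, num_workers=2)

model = fcn_resnet50(num_classes=NUM_CLASSES)      # any CNN-based segmenter fits here
criterion = nn.CrossEntropyLoss(ignore_index=255)  # skip void / unlabeled pixels
optimizer = optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)

model.train()
for images, masks in loader:                       # one retraining epoch
    optimizer.zero_grad()
    logits = model(images)["out"]                  # (N, C, H, W) class scores
    loss = criterion(logits, masks)
    loss.backward()
    optimizer.step()
```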