Summary: | Reducing the impact of haze on downstream visual processing is a challenging problem. In this paper, building on the atmospheric scattering model, we propose an end-to-end multi-scale feature multiple parallel fusion network, called MMP-Net, for single-image haze removal. MMP-Net comprises three components: a multi-scale CNN module, a residual learning module, and a deep parallel fusion module. 1) The multi-scale CNN module uses multi-scale convolutional neural networks (CNNs) to extract features at different scales, from global to local, and fuses these features multiple times in parallel. 2) The residual learning module introduces residual blocks to deeply learn detailed features, which recovers more image detail. 3) The deep parallel fusion module deeply merges the features from the residual learning module with the fused CNN features, which are finally used to recover a clean, haze-free image via the atmospheric scattering model. Experimental results show that, averaged over three datasets (SOTS, HSTS, and D-Hazy), the proposed MMP-Net improves PSNR from 20.91 dB to 22.21 dB and SSIM from 0.8720 to 0.9023 over the best state-of-the-art method, DehazeNet. Moreover, MMP-Net achieves the best subjective visual quality on real-world hazy images.
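The final recovery step relies on the standard atmospheric scattering model, I(x) = J(x)·t(x) + A·(1 − t(x)), where I is the observed hazy image, J the scene radiance, t the transmission map, and A the global atmospheric light. The sketch below inverts this model for given t and A; it is a minimal illustration of the formula only, not the MMP-Net architecture (which estimates the needed quantities with its CNN, residual, and fusion modules), and the function name and parameters are illustrative assumptions.

```python
import numpy as np

def recover_scene(hazy, transmission, airlight, t_min=0.1):
    """Invert the atmospheric scattering model I = J*t + A*(1 - t)
    to recover the scene radiance J from a hazy image I.

    hazy:          H x W x 3 float array in [0, 1]
    transmission:  H x W transmission map t(x), assumed given here
                   (a dehazing network would estimate it)
    airlight:      length-3 global atmospheric light A
    t_min:         lower bound on t to avoid dividing by ~0
    """
    t = np.clip(transmission, t_min, 1.0)[..., None]
    J = (hazy - airlight) / t + airlight
    return np.clip(J, 0.0, 1.0)
```

For a synthetic image formed exactly by the model with t above `t_min`, this inversion recovers the original scene radiance up to floating-point error.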