Parallel global convolutional network for semantic image segmentation
Main Authors: ,
Format: Article
Language: English
Published: Wiley, 2021-01-01
Series: IET Image Processing
Online Access: https://doi.org/10.1049/ipr2.12025
Summary: Abstract — In this paper, a novel convolutional neural network for fast semantic segmentation is presented. Deep convolutional neural networks have achieved great progress in visual scene understanding. However, accuracy gains have come mainly from increasing network depth and width, which slows down large networks and increases power consumption. A fast and efficient convolutional neural network, PGCNet, aimed at segmenting high-resolution images at high speed, is introduced. Compared with competitive methods, the resulting model achieves high performance with fewer parameters and floating point operations. First, a lightweight general-purpose architecture pre-trained on ImageNet is used as the main encoder. Second, a novel lateral connection module is introduced to better transmit features from the encoder to the decoder. Third, a PGCN block is proposed to extract features from each encoder stage, and an edge decoder is applied during training to supervise pixels on the boundaries of stuff and things. Experiments show that this method has clear advantages: PGCNet achieves 75.8% mean IoU on the Cityscapes test set and runs at 35.4 Hz on a standard Cityscapes image on a GTX 1080 Ti.
ISSN: 1751-9659, 1751-9667
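
The abstract does not spell out the internal design of the PGCN block. As a rough illustration only, the sketch below assumes it follows the familiar large-kernel "global convolution" idea: two parallel separable branches (1×k followed by k×1, and k×1 followed by 1×k) whose outputs are summed to approximate a dense k×k receptive field at low cost. The class name `GlobalConvBlock` and the kernel size `k` are hypothetical; the paper's actual block may differ.

```python
import torch
import torch.nn as nn


class GlobalConvBlock(nn.Module):
    """Illustrative global-convolution block (assumed design, not the paper's exact one).

    Two parallel separable large-kernel branches are summed, giving an
    effective k x k receptive field with far fewer parameters than a
    dense k x k convolution.
    """

    def __init__(self, in_ch: int, out_ch: int, k: int = 7):
        super().__init__()
        pad = k // 2
        # Branch A: 1xk followed by kx1
        self.branch_a = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size=(1, k), padding=(0, pad)),
            nn.Conv2d(out_ch, out_ch, kernel_size=(k, 1), padding=(pad, 0)),
        )
        # Branch B: kx1 followed by 1xk
        self.branch_b = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size=(k, 1), padding=(pad, 0)),
            nn.Conv2d(out_ch, out_ch, kernel_size=(1, k), padding=(0, pad)),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Sum of the two parallel branches; spatial size is preserved.
        return self.branch_a(x) + self.branch_b(x)


if __name__ == "__main__":
    x = torch.randn(1, 256, 64, 128)      # e.g. a feature map from an encoder stage
    y = GlobalConvBlock(256, 128)(x)
    print(y.shape)                         # torch.Size([1, 128, 64, 128])
```

In an encoder-decoder segmenter such as the one the abstract describes, a block like this would typically sit on the lateral connections so that each encoder stage's features are enriched with wide spatial context before being merged into the decoder; this is a plausible reading of the abstract, not a confirmed detail of PGCNet.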