Memory‐ and time‐efficient dense network for single‐image super‐resolution

Bibliographic Details
Main Authors: Nasrin Imanpour, Ahmad R. Naghsh‐Nilchi, Amirhassan Monadjemi, Hossein Karshenas, Kamal Nasrollahi, Thomas B. Moeslund
Format: Article
Language: English
Published: Wiley 2021-04-01
Series: IET Signal Processing
Online Access: https://doi.org/10.1049/sil2.12020
Description
Summary: Dense connections in convolutional neural networks (CNNs), which connect each layer to every other layer, can compensate for the loss of mid/high‐frequency information and further enhance high‐frequency signals. However, dense CNNs suffer from high memory usage because the concatenated feature‐maps accumulate in memory. To overcome this problem, a two‐step approach is proposed that learns representative feature‐maps for the concatenating layers. Specifically, a convolutional layer with many more filters is applied before each concatenating layer to learn richer feature‐maps, so that irrelevant and redundant feature‐maps can be discarded at concatenation. Compared with single‐image super‐resolution (SISR) using the basic dense block, the proposed method uses 24% less memory and 6% less test time while improving the peak signal‐to‐noise ratio by 0.24 dB. Moreover, while producing competitive results, it reduces the number of filters in the concatenating layers by at least a factor of 2, cutting memory consumption and test time by 40% and 12%, respectively. These results suggest that the proposed approach is a more practical method for SISR.
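To make the described idea concrete, below is a minimal PyTorch sketch of a dense block in which a wide convolution precedes each concatenation. The class name, the 3x3/1x1 kernel choice, and every hyperparameter (growth, expansion, num_layers) are illustrative assumptions, not details taken from the paper.

```python
import torch
import torch.nn as nn

class WideBeforeConcatDenseBlock(nn.Module):
    """Sketch (under assumptions) of a dense block where, before each
    concatenating layer, a convolution with many more filters learns richer
    feature-maps and a following reduction keeps only a few maps for
    concatenation, so fewer feature-maps are stored in memory."""

    def __init__(self, in_channels: int = 64, growth: int = 32,
                 expansion: int = 4, num_layers: int = 4):
        super().__init__()
        self.layers = nn.ModuleList()
        channels = in_channels
        for _ in range(num_layers):
            self.layers.append(nn.Sequential(
                # Wide conv: many more filters to learn richer features.
                nn.Conv2d(channels, growth * expansion, 3, padding=1),
                nn.ReLU(inplace=True),
                # 1x1 reduction: keep only `growth` maps for concatenation,
                # discarding redundant ones before they are stored.
                nn.Conv2d(growth * expansion, growth, 1),
            ))
            channels += growth

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats = [x]
        for layer in self.layers:
            feats.append(layer(torch.cat(feats, dim=1)))
        return torch.cat(feats, dim=1)


# Example: a 64-channel input grows by 4 * 32 concatenated maps.
block = WideBeforeConcatDenseBlock()
y = block(torch.randn(1, 64, 48, 48))   # shape: (1, 192, 48, 48)
```

In this sketch, only `growth` maps per layer enter the concatenation while the wider intermediate maps are discarded after use, which is one way such a block can trade a wider intermediate computation for lower stored-feature memory, in the spirit of the summary above.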
ISSN: 1751-9675, 1751-9683