Memory‐ and time‐efficient dense network for single‐image super‐resolution
Abstract Dense connections in convolutional neural networks (CNNs), which connect each layer to every other layer, can compensate for mid/high‐frequency information loss and further enhance high‐frequency signals. However, dense CNNs suffer from high memory usage due to the accumulation of concatenating feature‐maps stored in memory. To overcome this problem, a two‐step approach is proposed that learns the representative concatenating feature‐maps. Specifically, a convolutional layer with many more filters is used before concatenating layers to learn richer feature‐maps. Therefore, the irrelevant and redundant feature‐maps are discarded in the concatenating layers. The proposed method results in 24% and 6% less memory usage and test time, respectively, in comparison to single‐image super‐resolution (SISR) with the basic dense block. It also improves the peak signal‐to‐noise ratio by 0.24 dB. Moreover, the proposed method, while producing competitive results, decreases the number of filters in concatenating layers by at least a factor of 2 and reduces the memory consumption and test time by 40% and 12%, respectively. These results suggest that the proposed approach is a more practical method for SISR.
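The memory problem the abstract describes can be illustrated with a simple channel-count sketch: in a naive dense block, layer i receives the concatenation of all earlier outputs, so the activations held in memory grow linearly with depth, and shrinking the number of filters that feed the concatenations shrinks that accumulation. The layer count, input width, and growth rates below are illustrative assumptions for this sketch, not the paper's actual configuration.

```python
# Hypothetical activation-memory accounting for a dense block.
# All numbers (8 layers, 64 input channels, growth rates 32 vs. 16)
# are illustrative assumptions, not the paper's configuration.

def dense_block_activation_maps(num_layers, input_channels, growth_rate):
    """Total channels held across a naive dense block:
    layer i consumes input_channels + i * growth_rate concatenated maps."""
    total = 0
    channels = input_channels
    for _ in range(num_layers):
        total += channels          # concatenated input kept for this layer
        channels += growth_rate    # each layer appends growth_rate new maps
    return total

# Basic dense block vs. a variant that halves the filters feeding the
# concatenations (the abstract reports cutting concatenating-layer
# filters by at least a factor of 2).
basic = dense_block_activation_maps(num_layers=8, input_channels=64, growth_rate=32)
reduced = dense_block_activation_maps(num_layers=8, input_channels=64, growth_rate=16)
print(basic, reduced)  # the reduced variant accumulates fewer channels
```

With these toy settings the reduced variant holds roughly a third fewer accumulated channels, which is the intuition behind the memory savings the abstract reports; the paper's actual reductions (24% and 40%) depend on its real architecture.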
Main Authors: | Nasrin Imanpour, Ahmad R. Naghsh‐Nilchi, Amirhassan Monadjemi, Hossein Karshenas, Kamal Nasrollahi, Thomas B. Moeslund |
---|---|
Format: | Article |
Language: | English |
Published: | Wiley, 2021-04-01 |
Series: | IET Signal Processing |
Online Access: | https://doi.org/10.1049/sil2.12020 |
id |
doaj-47db0a9f11e54b7ba0abf19bfb62369e |
---|---|
record_format |
Article |
spelling |
IET Signal Processing, vol. 15, no. 2, pp. 141–152, published 2021-04-01 by Wiley (ISSN 1751-9675, 1751-9683). DOI: 10.1049/sil2.12020. Record updated 2021-08-02T08:25:07Z. Author affiliations: Nasrin Imanpour, Department of Computer Engineering, University of Isfahan, Isfahan, Iran; Ahmad R. Naghsh‐Nilchi, Department of Computer Engineering, University of Isfahan, Isfahan, Iran; Amirhassan Monadjemi, School of Continuing and Lifelong Education, National University of Singapore, Singapore 138607; Hossein Karshenas, Department of Computer Engineering, University of Isfahan, Isfahan, Iran; Kamal Nasrollahi, Department of Architecture, Design and Media Technology, Aalborg University, Aalborg, Denmark; Thomas B. Moeslund, Department of Architecture, Design and Media Technology, Aalborg University, Aalborg, Denmark. |
collection |
DOAJ |
author |
Nasrin Imanpour, Ahmad R. Naghsh‐Nilchi, Amirhassan Monadjemi, Hossein Karshenas, Kamal Nasrollahi, Thomas B. Moeslund |
title |
Memory‐ and time‐efficient dense network for single‐image super‐resolution |
issn |
1751-9675, 1751-9683 |
description |
Abstract Dense connections in convolutional neural networks (CNNs), which connect each layer to every other layer, can compensate for mid/high‐frequency information loss and further enhance high‐frequency signals. However, dense CNNs suffer from high memory usage due to the accumulation of concatenating feature‐maps stored in memory. To overcome this problem, a two‐step approach is proposed that learns the representative concatenating feature‐maps. Specifically, a convolutional layer with many more filters is used before concatenating layers to learn richer feature‐maps. Therefore, the irrelevant and redundant feature‐maps are discarded in the concatenating layers. The proposed method results in 24% and 6% less memory usage and test time, respectively, in comparison to single‐image super‐resolution (SISR) with the basic dense block. It also improves the peak signal‐to‐noise ratio by 0.24 dB. Moreover, the proposed method, while producing competitive results, decreases the number of filters in concatenating layers by at least a factor of 2 and reduces the memory consumption and test time by 40% and 12%, respectively. These results suggest that the proposed approach is a more practical method for SISR. |
url |
https://doi.org/10.1049/sil2.12020 |