Summary: | Remote sensing images contain diverse land surface scenes and ground objects at different scales, which greatly increases the difficulty of super-resolution tasks. Existing deep-learning-based methods do not handle this diversity well. To achieve high-quality super-resolution of remote sensing images, this article proposes a residual aggregation and split attentional fusion network (RASAF), which consists of three main parts. First, a split attentional fusion block is proposed. It uses a basic split–fusion mechanism to achieve cross-channel feature group interaction, allowing the method to adapt to the reconstruction of diverse land surface scenes. Second, a hierarchical loss function is used to fully exploit multiscale image information. Third, residual learning is adopted to reduce the difficulty of training for super-resolution. However, the features of individual residual branches are used only locally and fail to represent their true contribution, so a residual aggregation mechanism aggregates these local residual branch features to generate higher-quality ones. Comparisons with classical super-resolution methods on two widely used remote sensing datasets show that RASAF achieves better performance while striking a good balance between performance and model parameter count. In addition, RASAF's ability to support multilabel remote sensing image classification tasks further demonstrates its practical value.
|
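The summary only names the split–fusion mechanism used for cross-channel feature group interaction; the sketch below is a minimal PyTorch-style interpretation of what such a block might look like, not the authors' actual implementation. The class name `SplitAttentionFusion`, the group and reduction hyperparameters, and the exact pooling/softmax arrangement are assumptions made for illustration.

```python
import torch
import torch.nn as nn

class SplitAttentionFusion(nn.Module):
    """Illustrative split-fusion block (assumed design): split channels into
    groups, derive per-group attention from pooled shared context, re-weight
    each group, and merge back with a local residual connection."""

    def __init__(self, channels: int, groups: int = 4, reduction: int = 16):
        super().__init__()
        assert channels % groups == 0, "channels must be divisible by groups"
        self.groups = groups
        hidden = max(channels // reduction, groups)
        # 1x1 convs act as a small bottleneck MLP over pooled features
        self.fc1 = nn.Conv2d(channels // groups, hidden, kernel_size=1)
        self.relu = nn.ReLU(inplace=True)
        self.fc2 = nn.Conv2d(hidden, channels, kernel_size=1)
        self.softmax = nn.Softmax(dim=1)  # normalise attention across groups

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        g = self.groups
        # Split features into channel groups: (b, g, c/g, h, w)
        splits = x.view(b, g, c // g, h, w)
        # Shared context: sum over groups, then global average pooling
        gap = splits.sum(dim=1).mean(dim=(2, 3), keepdim=True)    # (b, c/g, 1, 1)
        # Per-group attention logits, softmax-normalised over the group axis
        attn = self.fc2(self.relu(self.fc1(gap)))                 # (b, c, 1, 1)
        attn = self.softmax(attn.view(b, g, c // g, 1, 1))
        # Re-weight each group, restore the original channel layout,
        # and add a local residual connection
        out = (splits * attn).reshape(b, c, h, w)
        return x + out


# Usage example (hypothetical shapes): the block preserves the feature map size.
block = SplitAttentionFusion(channels=64, groups=4)
y = block(torch.randn(1, 64, 48, 48))   # -> torch.Size([1, 64, 48, 48])
```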