Sentinel-2 Image Fusion Using a Deep Residual Network
Single sensor fusion is the fusion of two or more spectrally disjoint reflectance bands that have different spatial resolution and have been acquired by the same sensor. An example is Sentinel-2, a constellation of two satellites, which can acquire multispectral bands of 10 m, 20 m and 60 m resolution...
Main Authors: | Frosti Palsson, Johannes R. Sveinsson, Magnus O. Ulfarsson |
Format: | Article |
Language: | English |
Published: | MDPI AG, 2018-08-01 |
Series: | Remote Sensing |
Subjects: | residual neural network; image fusion; convolutional neural network; Sentinel-2 |
Online Access: | http://www.mdpi.com/2072-4292/10/8/1290 |
id |
doaj-db910c0348cc40cf83c6f66281c3599f |
record_format |
Article |
spelling |
doaj-db910c0348cc40cf83c6f66281c3599f | 2020-11-25T02:29:16Z | eng | MDPI AG | Remote Sensing | ISSN 2072-4292 | 2018-08-01 | vol. 10, iss. 8, art. 1290 | DOI 10.3390/rs10081290 | rs10081290 | Sentinel-2 Image Fusion Using a Deep Residual Network | Frosti Palsson, Johannes R. Sveinsson, Magnus O. Ulfarsson (all: Department of Electrical Engineering, University of Iceland, Hjardarhagi 2-6, Reykjavik 107, Iceland) | http://www.mdpi.com/2072-4292/10/8/1290 | residual neural network; image fusion; convolutional neural network; Sentinel-2 |
collection |
DOAJ |
language |
English |
format |
Article |
sources |
DOAJ |
author |
Frosti Palsson; Johannes R. Sveinsson; Magnus O. Ulfarsson |
spellingShingle |
Frosti Palsson; Johannes R. Sveinsson; Magnus O. Ulfarsson | Sentinel-2 Image Fusion Using a Deep Residual Network | Remote Sensing | residual neural network; image fusion; convolutional neural network; Sentinel-2 |
author_facet |
Frosti Palsson; Johannes R. Sveinsson; Magnus O. Ulfarsson |
author_sort |
Frosti Palsson |
title |
Sentinel-2 Image Fusion Using a Deep Residual Network |
title_short |
Sentinel-2 Image Fusion Using a Deep Residual Network |
title_full |
Sentinel-2 Image Fusion Using a Deep Residual Network |
title_fullStr |
Sentinel-2 Image Fusion Using a Deep Residual Network |
title_full_unstemmed |
Sentinel-2 Image Fusion Using a Deep Residual Network |
title_sort |
sentinel-2 image fusion using a deep residual network |
publisher |
MDPI AG |
series |
Remote Sensing |
issn |
2072-4292 |
publishDate |
2018-08-01 |
description |
Single sensor fusion is the fusion of two or more spectrally disjoint reflectance bands that have different spatial resolution and have been acquired by the same sensor. An example is Sentinel-2, a constellation of two satellites, which can acquire multispectral bands of 10 m, 20 m and 60 m resolution for visible, near infrared (NIR) and shortwave infrared (SWIR). In this paper, we present a method to fuse the fine and coarse spatial resolution bands to obtain finer spatial resolution versions of the coarse bands. It is based on a deep convolutional neural network which has a residual design that models the fusion problem. The residual architecture helps the network to converge faster and allows for deeper networks by relieving the network of having to learn the coarse spatial resolution part of the inputs, enabling it to focus on constructing the missing fine spatial details. Using several real Sentinel-2 datasets, we study the effects of the most important hyperparameters on the quantitative quality of the fused image, compare the method to several state-of-the-art methods and demonstrate that it outperforms the comparison methods in experiments. |
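The residual design described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the fused band is the upsampled coarse band plus a learned residual, so the network only has to construct the missing fine spatial detail. The `predict_residual` callable is a hypothetical stand-in for the deep CNN, and the array shapes are toy values.

```python
import numpy as np

def upsample_nearest(coarse, factor):
    # Nearest-neighbor upsampling of a 2-D band by an integer factor.
    return np.repeat(np.repeat(coarse, factor, axis=0), factor, axis=1)

def residual_fuse(coarse_band, fine_bands, factor, predict_residual):
    """Residual fusion: the network predicts only the missing fine
    spatial detail, which is added to the upsampled coarse band."""
    upsampled = upsample_nearest(coarse_band, factor)
    residual = predict_residual(upsampled, fine_bands)  # CNN stand-in
    return upsampled + residual

# Toy usage: a 2x2 20 m band fused onto a 4x4 (10 m) grid. With a
# zero residual the output equals the upsampled coarse input, which
# is the baseline the residual branch refines.
coarse = np.array([[0.1, 0.2], [0.3, 0.4]])
fine = np.zeros((4, 4))  # placeholder for the fine-resolution guide bands
fused = residual_fuse(coarse, fine, 2, lambda up, f: np.zeros_like(up))
print(fused.shape)  # (4, 4)
```

Because the identity (upsampling) path carries the coarse content, the convolutional layers start near a sensible solution, which is why the abstract notes faster convergence and support for deeper networks.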
topic |
residual neural network; image fusion; convolutional neural network; Sentinel-2 |
url |
http://www.mdpi.com/2072-4292/10/8/1290 |
work_keys_str_mv |
AT frostipalsson sentinel2imagefusionusingadeepresidualnetwork
AT johannesrsveinsson sentinel2imagefusionusingadeepresidualnetwork
AT magnusoulfarsson sentinel2imagefusionusingadeepresidualnetwork |
_version_ |
1724834215817641984 |