Background Subtraction Based on GAN and Domain Adaptation for VHR Optical Remote Sensing Videos
The application of deep learning techniques in background subtraction for VHR optical remote sensing videos holds the potential to facilitate multiple intelligent remote sensing processing tasks. However, existing background subtraction methods for VHR optical remote sensing videos still face technical challenges. First, conventional CNNs and other networks are limited by performance constraints. Second, existing background subtraction methods are mostly trained on natural videos because of the lack of VHR optical remote sensing video datasets. Third, VHR optical remote sensing videos have large scene sizes. In this article, we design a novel deep learning network that fully exploits GAN and domain adaptation; it measures and minimizes the discrepancy between the feature distributions of natural videos and VHR optical remote sensing videos, which significantly improves background subtraction performance for VHR optical remote sensing videos. Extensive experiments on the CDnet 2014 dataset and a VHR optical remote sensing video dataset demonstrate that the proposed method achieves an average FM of 0.8533, which indicates excellent background subtraction performance.
Main Authors: | Wentao Yu, Jing Bai, Licheng Jiao |
---|---|
Format: | Article |
Language: | English |
Published: | IEEE, 2020-01-01 |
Series: | IEEE Access |
Subjects: | Background subtraction; generative adversarial networks (GAN); domain adaptation; very high resolution (VHR) optical remote sensing videos |
Online Access: | https://ieeexplore.ieee.org/document/9123388/ |
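The abstract describes measuring and minimizing the discrepancy between the feature distributions of natural videos and VHR optical remote sensing videos. This record does not spell out the paper's exact discrepancy loss; one standard choice in domain adaptation is Maximum Mean Discrepancy (MMD), sketched below in plain Python on toy feature vectors (the names `rbf` and `mmd2` are illustrative, not from the paper):

```python
import math
import random

def rbf(x, y, sigma=1.0):
    # RBF kernel between two feature vectors.
    d2 = sum((a - b) ** 2 for a, b in zip(x, y))
    return math.exp(-d2 / (2 * sigma ** 2))

def mmd2(source, target, sigma=1.0):
    # Biased estimate of squared Maximum Mean Discrepancy between two samples.
    def k(xs, ys):
        return sum(rbf(a, b, sigma) for a in xs for b in ys) / (len(xs) * len(ys))
    return k(source, source) + k(target, target) - 2 * k(source, target)

random.seed(0)
src = [[random.gauss(0, 1) for _ in range(4)] for _ in range(50)]
near = [[random.gauss(0, 1) for _ in range(4)] for _ in range(50)]   # same distribution
far = [[random.gauss(3, 1) for _ in range(4)] for _ in range(50)]    # mean-shifted distribution
print(mmd2(src, near) < mmd2(src, far))  # True: a larger domain shift yields a larger MMD
```

Minimizing such a term during training pulls the two domains' feature distributions together, which is the general idea the abstract attributes to the domain-adaptation component.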
id |
doaj-dcea148ffbed4ee08df24e9e82d26207 |
---|---|
record_format |
Article |
spelling |
doaj-dcea148ffbed4ee08df24e9e82d26207 | 2021-03-30T01:52:37Z | eng | IEEE | IEEE Access | 2169-3536 | 2020-01-01 | vol. 8, pp. 119144-119157 | 10.1109/ACCESS.2020.3004495 | 9123388 | Background Subtraction Based on GAN and Domain Adaptation for VHR Optical Remote Sensing Videos | Wentao Yu (https://orcid.org/0000-0003-1416-5255), Jing Bai (https://orcid.org/0000-0001-5412-7793), Licheng Jiao (https://orcid.org/0000-0003-3354-9617) | School of Artificial Intelligence, Xidian University, Xi'an, China | https://ieeexplore.ieee.org/document/9123388/ |
collection |
DOAJ |
language |
English |
format |
Article |
sources |
DOAJ |
author |
Wentao Yu, Jing Bai, Licheng Jiao |
author_sort |
Wentao Yu |
title |
Background Subtraction Based on GAN and Domain Adaptation for VHR Optical Remote Sensing Videos |
publisher |
IEEE |
series |
IEEE Access |
issn |
2169-3536 |
publishDate |
2020-01-01 |
description |
The application of deep learning techniques in background subtraction for VHR optical remote sensing videos holds the potential to facilitate multiple intelligent remote sensing processing tasks. However, existing background subtraction methods for VHR optical remote sensing videos still face technical challenges. First, conventional CNNs and other networks are limited by performance constraints. Second, existing background subtraction methods are mostly trained on natural videos because of the lack of VHR optical remote sensing video datasets. Third, VHR optical remote sensing videos have large scene sizes. In this article, we design a novel deep learning network that fully exploits GAN and domain adaptation; it measures and minimizes the discrepancy between the feature distributions of natural videos and VHR optical remote sensing videos, which significantly improves background subtraction performance for VHR optical remote sensing videos. Extensive experiments on the CDnet 2014 dataset and a VHR optical remote sensing video dataset demonstrate that the proposed method achieves an average FM of 0.8533, which indicates excellent background subtraction performance. |
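The average FM of 0.8533 cited in the description is the F-measure used in CDnet-style evaluation: the harmonic mean of precision and recall over predicted foreground pixels. A minimal sketch on flat binary masks (the helper name `f_measure` is hypothetical, not from the paper):

```python
def f_measure(pred, gt):
    # pred, gt: flat binary masks, 1 = foreground pixel, 0 = background pixel.
    tp = sum(1 for p, g in zip(pred, gt) if p and g)        # true positives
    fp = sum(1 for p, g in zip(pred, gt) if p and not g)    # false positives
    fn = sum(1 for p, g in zip(pred, gt) if g and not p)    # false negatives
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

print(f_measure([1, 1, 0, 0], [1, 0, 1, 0]))  # 0.5
```

An average FM near 1.0 means the predicted foreground masks closely match the ground truth across the evaluated sequences.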
topic |
Background subtraction; generative adversarial networks (GAN); domain adaptation; very high resolution (VHR) optical remote sensing videos |
url |
https://ieeexplore.ieee.org/document/9123388/ |