Arbitrary-oriented target detection in large scene SAR images

Target detection in synthetic aperture radar (SAR) imagery has attracted considerable attention from researchers in national defense technology worldwide, owing to SAR's unique advantages such as high resolution and large scene image acquisition. However, strong speckle noise and a low signal-to-noise ratio make it difficult to extract representative target features from SAR images, which greatly limits the effectiveness of traditional methods. To address these problems, this paper proposes a framework called contextual rotation region-based convolutional neural network (RCNN) with multilayer fusion. Specifically, to enable the RCNN to detect targets in large scene SAR images efficiently, a maximum sliding strategy is applied to crop the large scene image into a series of sub-images before they are fed to the RCNN. Instead of using the highest-layer output for proposal generation and target detection, fusion feature maps with high resolution and rich semantic information are constructed by a multilayer fusion strategy. Rotation anchors are then introduced to predict the minimum circumscribed rectangle of each target and reduce redundant detection regions. Furthermore, shadow areas serve as contextual features, providing additional information that helps the detector identify and locate targets accurately. Experimental results on a simulated large scene SAR image dataset show that the proposed method achieves satisfactory performance in large scene SAR target detection.
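The abstract only names the cropping step; as a loose illustration of how a large scene can be split into overlapping sub-images before detection, here is a minimal Python sketch. The tile size, overlap, and border handling are assumptions for illustration only, not the paper's maximum sliding strategy.

```python
import numpy as np

def crop_into_subimages(image, tile=512, overlap=128):
    """Crop a large scene into overlapping sub-images.

    Illustrative only: tile size, overlap, and the stepping rule are
    assumptions, not the paper's maximum sliding strategy. Returns a list
    of (row_offset, col_offset, sub_image) tuples so that detections can
    later be mapped back to full-scene coordinates.
    """
    h, w = image.shape[:2]
    stride = tile - overlap
    tiles = []
    for top in range(0, max(h - overlap, 1), stride):
        for left in range(0, max(w - overlap, 1), stride):
            # Clamp the window to the image border instead of padding.
            top_c = min(top, max(h - tile, 0))
            left_c = min(left, max(w - tile, 0))
            tiles.append((top_c, left_c,
                          image[top_c:top_c + tile, left_c:left_c + tile]))
    return tiles

# Example: a simulated 3000 x 4000 single-channel SAR scene.
scene = np.zeros((3000, 4000), dtype=np.float32)
subs = crop_into_subimages(scene)
print(len(subs), subs[0][2].shape)
```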

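The multilayer fusion strategy combines shallow, high-resolution feature maps with deeper, semantically richer ones. A generic sketch of one such fusion operator (nearest-neighbor upsampling followed by element-wise addition, in the style of feature-pyramid networks) is given below; the paper's exact fusion design may differ.

```python
import numpy as np

def fuse_feature_maps(shallow, deep):
    """Fuse a shallow high-resolution map with a deep low-resolution map.

    shallow -- array of shape (C, H, W)
    deep    -- array of shape (C, H // k, W // k) for an integer factor k

    Nearest-neighbor upsampling plus element-wise addition is one generic
    fusion operator (as in feature-pyramid-style detectors); the paper's
    multilayer fusion strategy may combine layers differently.
    """
    c, h, w = shallow.shape
    kh, kw = h // deep.shape[1], w // deep.shape[2]
    upsampled = deep.repeat(kh, axis=1).repeat(kw, axis=2)
    return shallow + upsampled

shallow = np.random.rand(256, 64, 64).astype(np.float32)
deep = np.random.rand(256, 16, 16).astype(np.float32)
print(fuse_feature_maps(shallow, deep).shape)  # (256, 64, 64)
```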

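Rotation anchors regress oriented boxes rather than axis-aligned ones. The sketch below converts a (center, width, height, angle) parameterization, a common convention assumed here rather than taken from the paper, into the four corner points of such an oriented rectangle.

```python
import numpy as np

def rotated_box_corners(cx, cy, w, h, theta_deg):
    """Return the 4 corners of an oriented rectangle.

    (cx, cy)  -- box center in image coordinates
    w, h      -- box width and height
    theta_deg -- rotation angle in degrees, counter-clockwise

    This (cx, cy, w, h, theta) parameterization is a common convention
    for rotation anchors, not necessarily the paper's exact definition.
    """
    theta = np.deg2rad(theta_deg)
    c, s = np.cos(theta), np.sin(theta)
    # Corner offsets of the axis-aligned box, before rotation.
    dx = np.array([-w, w, w, -w]) / 2.0
    dy = np.array([-h, -h, h, h]) / 2.0
    # Rotate the offsets and translate them to the center.
    xs = cx + c * dx - s * dy
    ys = cy + s * dx + c * dy
    return np.stack([xs, ys], axis=1)  # shape (4, 2)

print(rotated_box_corners(100.0, 50.0, 40.0, 20.0, 30.0))
```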
Bibliographic Details
Main Authors: Zi-shuo Han, Chun-ping Wang, Qiang Fu (Shijiazhuang Campus, Army Engineering University, Shijiazhuang, 050003, China)
Format: Article
Language: English
Published: KeAi Communications Co., Ltd., 2020-08-01
Series: Defence Technology, Vol. 16, No. 4, pp. 933-946
ISSN: 2214-9147
Subjects: Target detection; Convolutional neural network; Multilayer fusion; Context information; Synthetic aperture radar
Online Access: http://www.sciencedirect.com/science/article/pii/S2214914719306968