Video Object Detection Guided by Object Blur Evaluation
In recent years, image-based object detection algorithms have been transferred directly to video object detection. These frame-by-frame processing methods are suboptimal owing to degenerate object appearance such as motion blur, defocus, and rare poses. Existing works on video object detection mostly focus on feature aggregation at the pixel and instance levels, but the impact of blur on the aggregation process has not been well exploited so far. In this article, we propose an end-to-end blur-aid feature aggregation network (BFAN) for video object detection. The proposed BFAN focuses on the aggregation process under blur, including motion blur and defocus, and achieves high accuracy with little additional computation. In BFAN, we evaluate the object blur degree of each frame and use it as the weight for aggregation. Notably, the background is usually flat, which has a negative impact on the object blur degree evaluation; therefore, we introduce a light saliency detection network to alleviate background interference. Experiments conducted on the ImageNet VID dataset show that BFAN achieves state-of-the-art detection performance of 79.1% mAP, a 3-point improvement over the video object detection baseline.
Main Authors: | Yujie Wu, Hong Zhang, Yawei Li, Yifan Yang, Ding Yuan |
---|---|
Author Affiliation: | Image Processing Center, Beihang University, Beijing, China |
Format: | Article |
Language: | English |
Published: | IEEE, 2020-01-01 |
Series: | IEEE Access, Vol. 8, pp. 208554-208565 |
ISSN: | 2169-3536 |
DOI: | 10.1109/ACCESS.2020.3038913 |
Subjects: | Video object detection; object blur degree evaluation; saliency detection |
Online Access: | https://ieeexplore.ieee.org/document/9262895/ |
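The abstract above describes weighting per-frame features by an estimated object blur degree, with a light saliency network suppressing flat background regions before the blur evaluation. Below is a minimal, hypothetical NumPy sketch of that weighting idea only; the sharpness measure (variance of a Laplacian response), the saliency mask, and all function names are illustrative assumptions, not the paper's actual learned blur-evaluation or aggregation modules.

```python
import numpy as np

def laplacian_response(gray):
    """4-neighbour Laplacian of a grayscale image (H, W); a common hand-crafted
    sharpness cue, used here only as a stand-in for a learned blur evaluator."""
    return (-4.0 * gray
            + np.roll(gray, 1, axis=0) + np.roll(gray, -1, axis=0)
            + np.roll(gray, 1, axis=1) + np.roll(gray, -1, axis=1))

def object_blur_score(gray, saliency_mask):
    """Higher score = sharper object region. The saliency mask (H, W) in [0, 1]
    down-weights the flat background mentioned in the abstract."""
    lap = laplacian_response(gray)
    w = saliency_mask + 1e-6
    mean = np.sum(lap * w) / np.sum(w)
    # Saliency-weighted variance of the Laplacian response.
    return np.sum(w * (lap - mean) ** 2) / np.sum(w)

def blur_weighted_aggregation(features, grays, saliency_masks, temperature=1.0):
    """Aggregate per-frame feature maps (T, C, H, W) into a single map for the
    reference frame, using softmax-normalised sharpness scores as weights."""
    scores = np.array([object_blur_score(g, m)
                       for g, m in zip(grays, saliency_masks)]) / temperature
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    # Weighted sum over the temporal axis.
    return np.tensordot(weights, features, axes=(0, 0))

# Toy usage: 5 neighbouring frames, 16-channel feature maps.
T, C, H, W = 5, 16, 32, 32
rng = np.random.default_rng(0)
feats = rng.standard_normal((T, C, H, W))
grays = rng.random((T, H, W))
sal = np.ones((T, H, W))          # trivial saliency masks for the toy example
agg = blur_weighted_aggregation(feats, grays, sal)
print(agg.shape)                  # (16, 32, 32)
```

Note that in BFAN the blur evaluation and aggregation are part of the end-to-end network described in the abstract; this sketch only illustrates how a per-frame quality score can serve as an aggregation weight.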