Saliency Detection Using Global and Local Information Under Multilayer Cellular Automata
| Main Authors: | Yihang Liu, Peiyan Yuan |
| --- | --- |
| Format: | Article |
| Language: | English |
| Published: | IEEE, 2019-01-01 |
| Series: | IEEE Access |
| Subjects: | Saliency detection; global and local maps; multilayer cellular automata; CNN-based encoder-decoder model; sparse coding |
| Online Access: | https://ieeexplore.ieee.org/document/8708313/ |
id |
doaj-37a78ea932e445ae971c334f51f2a711 |
spelling |
Yihang Liu and Peiyan Yuan (https://orcid.org/0000-0003-2019-7448), College of Computer and Information Engineering, Henan Normal University, Xinxiang, China. "Saliency Detection Using Global and Local Information Under Multilayer Cellular Automata." IEEE Access, vol. 7, pp. 72736-72748, 2019-01-01. DOI: 10.1109/ACCESS.2019.2915261; article 8708313; ISSN 2169-3536. Record doaj-37a78ea932e445ae971c334f51f2a711, last updated 2021-03-30T00:11:41Z. |
collection |
DOAJ |
language |
English |
format |
Article |
author |
Yihang Liu; Peiyan Yuan |
title |
Saliency Detection Using Global and Local Information Under Multilayer Cellular Automata |
publisher |
IEEE |
series |
IEEE Access |
issn |
2169-3536 |
publishDate |
2019-01-01 |
description |
To detect salient objects in natural images with low contrast and complex backgrounds, a saliency detection method that fuses global and local information under multilayer cellular automata is proposed. First, a global saliency map is obtained with an iteratively trained convolutional neural network (CNN)-based encoder-decoder model; to transmit high-level information to the lower layers and to reinforce object edges, skip connections and an edge penalty term are added to the network. Second, foreground and background codebooks are generated from the global saliency map, and sparse codes are then computed with a locality-constrained linear coding model, which yields a local saliency map. Finally, the final saliency map is obtained by fusing the global and local saliency maps under the multilayer cellular automata framework. The experimental results show that the average F-measure of our method on the MSRA 10K, ECSSD, DUT-OMRON, HKU-IS, THUR 15K, and XPIE datasets is 93.4%, 89.5%, 79.4%, 88.7%, 73.6%, and 85.2%, respectively, and the corresponding MAE values are 0.046, 0.067, 0.054, 0.044, 0.072, and 0.049. These results indicate that our method achieves both high saliency detection accuracy and strong generalization ability; in particular, it can effectively detect salient objects in natural images with low contrast and complex backgrounds. |
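As an illustration of the fusion step described in the abstract, the following is a minimal Python sketch of a multilayer-cellular-automata-style synchronous update that co-evolves several saliency maps (for example, a global map and a local map) and averages the result. The mean-based binarization threshold, the confidence weight `lam`, and the function name `mca_fuse` are assumptions made for illustration only, not the authors' exact formulation.

```python
import numpy as np

def mca_fuse(saliency_maps, iterations=3, lam=0.5, eps=1e-6):
    """Fuse saliency maps (values in [0, 1]) with a simple
    multilayer-cellular-automata-style synchronous update.

    Each map is one layer of cells. At every step, a cell in layer m
    becomes more salient if the other layers currently label the same
    pixel as foreground, and less salient otherwise. The update is done
    in log-odds space so values stay in (0, 1).
    """
    maps = [np.clip(np.asarray(s, dtype=np.float64), eps, 1 - eps)
            for s in saliency_maps]

    for _ in range(iterations):
        # Binarize every layer with its own adaptive (mean) threshold.
        masks = [(s >= s.mean()).astype(np.float64) for s in maps]
        updated = []
        for m, s in enumerate(maps):
            logit = np.log(s / (1 - s))
            for k, mask in enumerate(masks):
                if k != m:
                    # +lam where layer k votes foreground, -lam otherwise.
                    logit += lam * (2.0 * mask - 1.0)
            updated.append(1.0 / (1.0 + np.exp(-logit)))
        maps = updated

    # Final saliency map: average of the co-evolved layers.
    return np.mean(maps, axis=0)

# Hypothetical usage with two maps of identical size:
# fused = mca_fuse([global_map, local_map])
```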
topic |
Saliency detection; global and local maps; multilayer cellular automata; CNN-based encoder-decoder model; sparse coding |
url |
https://ieeexplore.ieee.org/document/8708313/ |