Saliency Detection Using Global and Local Information Under Multilayer Cellular Automata

Bibliographic Details
Main Authors: Yihang Liu, Peiyan Yuan
Format: Article
Language: English
Published: IEEE 2019-01-01
Series: IEEE Access
Online Access: https://ieeexplore.ieee.org/document/8708313/
Description
Summary: To detect salient objects in natural images with low contrast and complex backgrounds, a saliency detection method that fuses global and local information under multilayer cellular automata is proposed. First, a global saliency map is obtained by an iteratively trained convolutional neural network (CNN)-based encoder-decoder model; skip connections and an edge penalty term are added to the network to transmit high-level information to the lower layers and to reinforce object edges. Second, foreground and background codebooks are generated from the global saliency map, sparse codes are then computed with the locality-constrained linear coding model, and a local saliency map is produced from them. Finally, the final saliency map is obtained by fusing the global and local saliency maps under the multilayer cellular automata framework. The experimental results show that the average F-measure of the method on the MSRA 10K, ECSSD, DUT-OMRON, HKU-IS, THUR 15K, and XPIE datasets is 93.4%, 89.5%, 79.4%, 88.7%, 73.6%, and 85.2%, respectively, with corresponding MAE values of 0.046, 0.067, 0.054, 0.044, 0.072, and 0.049. These findings indicate that the method achieves both high saliency detection accuracy and strong generalization ability; in particular, it can effectively detect salient objects in natural images with low contrast and complex backgrounds.
ISSN:2169-3536
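
As an illustration of the final fusion step described in the summary, the following Python sketch shows one plausible multilayer-cellular-automata-style synchronous update for combining a global and a local saliency map. The log-odds update rule, the per-map mean threshold, the number of iterations, and the constant lam are assumptions made for illustration only; they are not taken from the paper.

import numpy as np

def mca_fuse(saliency_maps, n_iters=10, lam=0.6):
    """Fuse saliency maps (values in [0, 1]) with a multilayer-cellular-
    automata-style synchronous update.

    Sketch only: the update rule, the per-map mean threshold, and lam
    are illustrative choices, not the authors' settings.
    """
    eps = 1e-6
    maps = [np.clip(np.asarray(s, dtype=np.float64), eps, 1 - eps)
            for s in saliency_maps]
    # Work in log-odds space so votes from the other maps accumulate additively.
    states = [np.log(m / (1 - m)) for m in maps]
    step = np.log(lam / (1 - lam))          # strength of one neighbor's vote
    thresholds = [m.mean() for m in maps]   # per-map foreground threshold

    for _ in range(n_iters):
        probs = [1.0 / (1.0 + np.exp(-st)) for st in states]
        new_states = []
        for m in range(len(states)):
            # Each pixel's "neighbors" are the same pixel in the other maps;
            # each neighbor votes foreground (+1) or background (-1)
            # according to its own threshold.
            votes = sum(np.sign(probs[j] - thresholds[j])
                        for j in range(len(states)) if j != m)
            new_states.append(states[m] + votes * step)
        states = new_states

    fused = np.mean([1.0 / (1.0 + np.exp(-st)) for st in states], axis=0)
    return (fused - fused.min()) / (fused.max() - fused.min() + eps)

# Example usage with hypothetical global and local saliency maps:
# global_map = np.random.rand(240, 320)
# local_map = np.random.rand(240, 320)
# final_map = mca_fuse([global_map, local_map])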