Deep ChaosNet for Action Recognition in Videos


Bibliographic Details
Main Authors: Huafeng Chen, Maosheng Zhang, Zhengming Gao, Yunhong Zhao
Format: Article
Language: English
Published: Hindawi-Wiley 2021-01-01
Series: Complexity
Online Access:http://dx.doi.org/10.1155/2021/6634156
Description
Summary: Existing chaos-based methods for action recognition in videos rely on handcrafted features, which limits their recognition accuracy. In this paper, we extend ChaosNet into a deep neural network and apply it to action recognition. First, we extend ChaosNet to a deep ChaosNet for extracting action features. Then, we feed the features into a low-level LSTM encoder and a high-level LSTM encoder to obtain low-level coding output and high-level coding results, respectively. The agent is a behavior recognizer that produces the recognition results, and the manager is a hidden layer responsible for providing behavioral segmentation targets at the high level. Our experiments are conducted on two standard action datasets, UCF101 and HMDB51. The experimental results show that the proposed algorithm outperforms the state of the art.
ISSN: 1099-0526
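
The following is a minimal PyTorch sketch of the two-level encoding pipeline the summary describes: per-frame features from a deep ChaosNet extractor pass through a low-level LSTM encoder and a high-level LSTM encoder, an agent head produces the recognition result, and a manager head emits a behavioral segmentation signal. The feature extractor, layer sizes, and head implementations are assumptions for illustration only; the record does not specify them.

import torch
import torch.nn as nn

class HierarchicalLSTMRecognizer(nn.Module):
    """Sketch of the low-level/high-level LSTM encoding with agent and manager heads."""
    def __init__(self, feat_dim=512, hidden_dim=256, num_classes=101):
        super().__init__()
        # Low-level LSTM encoder over per-frame deep ChaosNet features (dimensions assumed).
        self.low_encoder = nn.LSTM(feat_dim, hidden_dim, batch_first=True)
        # High-level LSTM encoder over the low-level coding output.
        self.high_encoder = nn.LSTM(hidden_dim, hidden_dim, batch_first=True)
        # Agent: behavior recognizer producing action class scores.
        self.agent = nn.Linear(hidden_dim, num_classes)
        # Manager: hidden layer giving a per-step behavioral segmentation signal.
        self.manager = nn.Linear(hidden_dim, 1)

    def forward(self, chaos_features):
        # chaos_features: (batch, time, feat_dim), assumed output of a deep ChaosNet extractor.
        low_out, _ = self.low_encoder(chaos_features)
        high_out, _ = self.high_encoder(low_out)
        logits = self.agent(high_out[:, -1])               # recognition result from the last step
        seg_signal = self.manager(high_out).squeeze(-1)    # per-step segmentation targets
        return logits, seg_signal

# Usage on a dummy clip of 16 frames (batch of 2, 512-dim features):
feats = torch.randn(2, 16, 512)
logits, seg = HierarchicalLSTMRecognizer()(feats)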