Abnormal Event Detection in Videos Based on Deep Neural Networks

Abnormal event detection has attracted widespread attention due to its importance in video surveillance. The scarcity of labeled abnormal samples makes the problem difficult to solve. This paper proposes a partially supervised learning method that trains the detection model on normal samples only, for both detecting and localizing abnormal events in video. Assuming that the normal samples follow a Gaussian distribution, an abnormal sample will appear with low probability under that distribution. The method builds on the variational autoencoder (VAE): through end-to-end deep learning, it constrains the hidden-layer representation of normal samples to a Gaussian distribution. Given a test sample, its hidden-layer representation is obtained through the VAE and yields the probability that the sample belongs to the Gaussian distribution; the sample is then judged normal or abnormal against a detection threshold. Experiments are conducted on two publicly available datasets, the UCSD and Avenue datasets. The results show that the proposed method achieves 92.3% and 82.1% frame-level AUC, respectively, at an average speed of 571 frames per second, demonstrating the effectiveness and efficiency of the framework compared with other state-of-the-art approaches.
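The scoring step described in the abstract can be sketched as follows. This is an illustrative sketch only, not the paper's implementation: the VAE encoder is omitted, the latent mean `z_mean` is assumed to be given, and a standard-normal prior and the threshold value are assumptions for the example.

```python
import numpy as np

def gaussian_log_prob(z):
    """Log-density of z under a standard Gaussian N(0, I), summed over latent dims."""
    return -0.5 * np.sum(z ** 2 + np.log(2.0 * np.pi), axis=-1)

def anomaly_score(z_mean):
    """Negative log-probability: low for latents near the prior mean, high far from it."""
    return -gaussian_log_prob(z_mean)

def is_abnormal(z_mean, threshold):
    """Flag a sample as abnormal when its score exceeds the detection threshold."""
    return anomaly_score(z_mean) > threshold

# A latent close to the prior mean (as a normal frame's encoding would be)
# scores low; one far from it scores high and is flagged.
normal_latent = np.array([[0.1, 0.1]])
abnormal_latent = np.array([[3.0, 3.0]])
print(is_abnormal(normal_latent, threshold=5.0))    # [False]
print(is_abnormal(abnormal_latent, threshold=5.0))  # [ True]
```

In the paper's setting the threshold is a tunable detection parameter; sweeping it over the test set is what produces the frame-level ROC curve behind the reported AUC figures.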

Bibliographic Details
Main Author: Qinmin Ma
Format: Article
Language: English
Published: Hindawi Limited, 2021-01-01
Series: Scientific Programming
ISSN: 1875-919X
Online Access: http://dx.doi.org/10.1155/2021/6412608