Human Fall Detection Based on Video Analysis and LRCN Model
Master's === National Taiwan University of Science and Technology === Graduate Institute of Automation and Control === 107 === In recent years, medical progress has extended human life expectancy, and together with the country's declining birth rate this has led to an aging population. Home-care issues frequently appear in the news, and falls easily cause serious injuries...
Main Authors: | CHAS-HSUN CHEN 陳昭勳 |
---|---|
Other Authors: | Chen-Hsiung Yang 楊振雄 |
Format: | Others |
Language: | en_US |
Published: | 2019 |
Online Access: | http://ndltd.ncl.edu.tw/handle/74353j |
id |
ndltd-TW-107NTUS5146009 |
record_format |
oai_dc |
spelling |
ndltd-TW-107NTUS51460092019-10-23T05:46:03Z http://ndltd.ncl.edu.tw/handle/74353j Human Fall Detection Based on Video Analysis and LRCN Model 基於視訊分析及LRCN模型之人體跌倒偵測 CHAS-HSUN CHEN 陳昭勳 Master's === National Taiwan University of Science and Technology === Graduate Institute of Automation and Control === 107 Chen-Hsiung Yang 楊振雄 2019 學位論文 ; thesis 72 en_US |
collection |
NDLTD |
language |
en_US |
format |
Others |
sources |
NDLTD |
description |
Master's === National Taiwan University of Science and Technology === Graduate Institute of Automation and Control === 107 === In recent years, medical progress has extended human life expectancy, and together with the country's declining birth rate this has led to an aging population. Home-care issues frequently appear in the news, and falls easily cause serious injuries to the elderly. We therefore apply computer vision and deep learning techniques to monitor the activity status of elderly people with limited mobility, helping to relieve the manpower shortage in nursing centers.
Whereas previous theses often relied on manually defined labels and parameters to characterize falls, this thesis combines two-dimensional image analysis with deep learning algorithms to propose a system that automatically detects falls of the elderly.
The system proposed in this thesis consists of three parts. First, motion information of the moving subject is extracted from the video using dense optical flow. Next, a sliding-window scheme divides the full video into many short clips of 10 frames each. Finally, an LRCN (CNN+RNN) hybrid deep learning model analyzes the optical-flow images and determines whether a fall event has occurred. The CNN feature-extraction model uses the VGG16 architecture, while the RNN temporal-classification model uses LSTM as its hidden layer.
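The thesis text does not include source code, so the following is only a minimal Python sketch of the pipeline just described, assuming OpenCV's Farneback dense optical flow, HSV-encoded flow images resized to 224×224, a frozen ImageNet-pretrained VGG16 backbone wrapped in TimeDistributed, and a single LSTM layer with a sigmoid fall/no-fall output. The function names and parameter values here (optical_flow_frames, sliding_windows, build_lrcn, window step, LSTM width) are illustrative assumptions, not taken from the thesis.

```python
# Sketch of the described pipeline: dense optical flow -> 10-frame sliding
# windows -> LRCN (TimeDistributed VGG16 + LSTM). Details are assumptions.
import cv2
import numpy as np
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16

def optical_flow_frames(video_path, size=(224, 224)):
    """Compute Farneback dense optical flow and render it as BGR images."""
    cap = cv2.VideoCapture(video_path)
    ok, prev = cap.read()
    prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
    flow_imgs = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        mag, ang = cv2.cartToPolar(flow[..., 0], flow[..., 1])
        hsv = np.zeros_like(frame)                   # HSV encoding of the flow
        hsv[..., 0] = ang * 180 / np.pi / 2          # hue = flow direction
        hsv[..., 1] = 255
        hsv[..., 2] = cv2.normalize(mag, None, 0, 255, cv2.NORM_MINMAX)
        flow_imgs.append(cv2.resize(cv2.cvtColor(hsv, cv2.COLOR_HSV2BGR), size))
        prev_gray = gray
    cap.release()
    return np.array(flow_imgs, dtype=np.float32) / 255.0

def sliding_windows(frames, win=10, step=1):
    """Cut the flow-image sequence into overlapping clips of `win` frames."""
    return np.array([frames[i:i + win]
                     for i in range(0, len(frames) - win + 1, step)])

def build_lrcn(seq_len=10, frame_shape=(224, 224, 3)):
    """LRCN: per-frame VGG16 features fed to an LSTM, sigmoid fall output."""
    cnn = VGG16(weights="imagenet", include_top=False, pooling="avg",
                input_shape=frame_shape)
    cnn.trainable = False                            # frozen backbone (assumption)
    clips = layers.Input(shape=(seq_len,) + frame_shape)
    feats = layers.TimeDistributed(cnn)(clips)       # (seq_len, 512) per clip
    temporal = layers.LSTM(128)(feats)               # temporal modelling
    fall_prob = layers.Dense(1, activation="sigmoid")(temporal)
    model = models.Model(clips, fall_prob)
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model
```

In such a setup, the per-clip label would come from whether the clip overlaps an annotated fall interval; the 10-frame window length simply mirrors the clip size stated in the abstract.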
Finally, we tune the hyperparameters and apply five-fold cross-validation to check whether the model overfits. The experimental results show that fall-event detection achieves a sensitivity of 99.76693% and an accuracy of 97.67701%.
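A hedged sketch of the five-fold cross-validation step follows, using scikit-learn's KFold together with the illustrative build_lrcn from the previous snippet; the batch size, epoch count, and accuracy-only metric are assumptions, and sensitivity would additionally be computed from per-fold confusion matrices.

```python
# Minimal 5-fold cross-validation loop to check for overfitting (assumed setup).
import numpy as np
from sklearn.model_selection import KFold

def cross_validate(clips, labels, folds=5, epochs=20, batch_size=8):
    """Train a fresh LRCN per fold and compare train vs. validation accuracy."""
    kf = KFold(n_splits=folds, shuffle=True, random_state=0)
    val_scores = []
    for fold, (tr_idx, va_idx) in enumerate(kf.split(clips), start=1):
        model = build_lrcn()                          # new model each fold
        history = model.fit(clips[tr_idx], labels[tr_idx],
                            epochs=epochs, batch_size=batch_size, verbose=0)
        _, val_acc = model.evaluate(clips[va_idx], labels[va_idx], verbose=0)
        train_acc = history.history["accuracy"][-1]
        val_scores.append(val_acc)
        # A large train/validation gap would indicate overfitting.
        print(f"fold {fold}: train acc {train_acc:.4f}, val acc {val_acc:.4f}")
    print(f"mean validation accuracy: {np.mean(val_scores):.4f}")
```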
|
author2 |
Chen-Hsiung Yang |
author_facet |
Chen-Hsiung Yang CHAS-HSUN CHEN 陳昭勳 |
author |
CHAS-HSUN CHEN 陳昭勳 |
spellingShingle |
CHAS-HSUN CHEN 陳昭勳 Human Fall Detection Based on Video Analysis and LRCN Model |
author_sort |
CHAS-HSUN CHEN |
title |
Human Fall Detection Based on Video Analysis and LRCN Model |
title_short |
Human Fall Detection Based on Video Analysis and LRCN Model |
title_full |
Human Fall Detection Based on Video Analysis and LRCN Model |
title_fullStr |
Human Fall Detection Based on Video Analysis and LRCN Model |
title_full_unstemmed |
Human Fall Detection Based on Video Analysis and LRCN Model |
title_sort |
human fall detection based on video analysis and lrcn model |
publishDate |
2019 |
url |
http://ndltd.ncl.edu.tw/handle/74353j |
work_keys_str_mv |
AT chashsunchen humanfalldetectionbasedonvideoanalysisandlrcnmodel AT chénzhāoxūn humanfalldetectionbasedonvideoanalysisandlrcnmodel AT chashsunchen jīyúshìxùnfēnxījílrcnmóxíngzhīréntǐdiēdàozhēncè AT chénzhāoxūn jīyúshìxùnfēnxījílrcnmóxíngzhīréntǐdiēdàozhēncè |
_version_ |
1719276171754995712 |