A Deep Learning Approach for Human Activities Recognition From Multimodal Sensing Devices
Main Authors: | |
---|---|
Format: | Article |
Language: | English |
Published: | IEEE, 2020-01-01 |
Series: | IEEE Access |
Subjects: | |
Online Access: | https://ieeexplore.ieee.org/document/9209961/ |
Summary: | Research in the recognition of human activities of daily living has improved significantly with deep learning techniques. Traditional human activity recognition approaches often rely on handcrafted features derived through heuristic processes from a single sensing modality. Deep learning techniques address most of these problems by automatically extracting features from multimodal sensing devices to recognise activities accurately. In this paper, we propose a multi-channel deep learning architecture that combines a convolutional neural network (CNN) with a bidirectional long short-term memory (BLSTM) network. The advantage of this model is that the CNN layers perform direct mapping and abstract representation of the raw sensor inputs, extracting features at different resolutions, while the BLSTM layer exploits both the forward and backward sequences to refine the extracted features for activity recognition. We evaluate the proposed model on two publicly available datasets. The experimental results show that it performs considerably better than our baseline models and other models evaluated on the same datasets, and they demonstrate the suitability of the proposed model for enhanced human activity recognition from multimodal sensing devices. (See the architecture sketch following this record.) |
ISSN: | 2169-3536 |
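The summary above describes a multi-channel CNN feeding a BLSTM over windowed multimodal sensor data. Below is a minimal, illustrative sketch of that kind of architecture in tf.keras; the window length, channel count, kernel sizes, filter counts, and number of activity classes are assumptions made for the example and are not taken from the paper.

```python
# Minimal sketch of a multi-channel CNN + BLSTM activity classifier,
# assuming windowed multimodal sensor input. Hyperparameters below are
# illustrative assumptions, not the paper's reported configuration.
import tensorflow as tf
from tensorflow.keras import layers, Model

WINDOW_LEN = 128   # samples per sliding window (assumed)
N_CHANNELS = 9     # e.g. 3-axis accelerometer + gyroscope + magnetometer (assumed)
N_CLASSES = 6      # number of activity labels (assumed)


def build_cnn_blstm(window_len=WINDOW_LEN, n_channels=N_CHANNELS, n_classes=N_CLASSES):
    inputs = layers.Input(shape=(window_len, n_channels))

    # Parallel CNN channels with different kernel sizes extract features
    # from the raw sensor window at different temporal resolutions.
    branches = []
    for kernel_size in (3, 5, 11):
        x = layers.Conv1D(64, kernel_size, padding="same", activation="relu")(inputs)
        x = layers.Conv1D(64, kernel_size, padding="same", activation="relu")(x)
        x = layers.MaxPooling1D(pool_size=2)(x)
        branches.append(x)
    merged = layers.Concatenate()(branches)

    # The bidirectional LSTM reads the merged feature sequence forwards and
    # backwards before the final classification layer.
    x = layers.Bidirectional(layers.LSTM(128))(merged)
    x = layers.Dropout(0.5)(x)
    outputs = layers.Dense(n_classes, activation="softmax")(x)

    model = Model(inputs, outputs)
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model


if __name__ == "__main__":
    build_cnn_blstm().summary()
```

Each branch uses a different kernel size so the concatenated features cover several temporal resolutions before the bidirectional layer processes the sequence in both directions, mirroring the multi-resolution CNN plus BLSTM idea in the abstract.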