New Sensor Data Structuring for Deeper Feature Extraction in Human Activity Recognition

For the effective application of thriving human-assistive technologies in healthcare services and human–robot collaborative tasks, computing devices must be aware of human movements. Developing a reliable real-time activity recognition method for the continuous and smooth operation of such smart devices is imperative. To achieve this, lightweight and intelligent methods that use ubiquitous sensors are pivotal. In this study, with the correlation of time-series data in mind, a new method of data structuring for deeper feature extraction is introduced. The activity data were collected using a smartphone through a purpose-built iOS application. Data from eight activities were shaped into single- and double-channel inputs to extract deep temporal and spatial features of the signals. In addition to the time domain, the raw data were also represented in the Fourier and wavelet domains. Among the several neural network models trained for deep-learning classification of the activities, a convolutional neural network with a double-channel time-domain input performed best. The method was further evaluated on other public datasets, where better performance was obtained. Finally, the practicality of the trained model was tested in real time on a computer and a smartphone, where it demonstrated promising results.
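As a concrete illustration of the single-/double-channel structuring described in the abstract, the sketch below (Python/NumPy) windows 3-axis accelerometer and gyroscope streams and stacks them into image-like two-channel inputs for a convolutional network. The window length, stride, sampling rate, and the accelerometer/gyroscope channel split are illustrative assumptions only; the record does not state the paper's exact parameters.

import numpy as np

def make_double_channel_windows(accel, gyro, window=128, stride=64):
    """Slice 3-axis accelerometer and gyroscope streams (each of shape (T, 3))
    into overlapping windows and stack them as two channels.
    Returns an array of shape (N, 2, window, 3)."""
    T = min(len(accel), len(gyro))
    starts = range(0, T - window + 1, stride)
    return np.asarray([
        np.stack([accel[s:s + window], gyro[s:s + window]], axis=0)  # (2, window, 3)
        for s in starts
    ])

# Example with synthetic 6-axis data (roughly 10 s sampled at 50 Hz):
rng = np.random.default_rng(0)
acc = rng.standard_normal((500, 3))
gyr = rng.standard_normal((500, 3))
X = make_double_channel_windows(acc, gyr)
print(X.shape)  # (6, 2, 128, 3); a single-channel variant would instead
                # place the six axes side by side as (N, 1, window, 6)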

Bibliographic Details
Main Authors: Tsige Tadesse Alemayoh, Jae Hoon Lee, Shingo Okamoto
Format: Article
Language: English
Published: MDPI AG, 2021-04-01
Series: Sensors
Subjects: human activity recognition; inertial measurement unit sensors; deep learning; convolutional neural network; input adaptation
Online Access: https://www.mdpi.com/1424-8220/21/8/2814
id doaj-767efd1540fc46bb8b85fae3704bac30
record_format Article
spelling doaj-767efd1540fc46bb8b85fae3704bac30 | 2021-04-16T23:05:23Z | eng | MDPI AG | Sensors (ISSN 1424-8220) | 2021-04-01 | vol. 21, no. 8, article 2814 | doi:10.3390/s21082814 | New Sensor Data Structuring for Deeper Feature Extraction in Human Activity Recognition | Tsige Tadesse Alemayoh, Jae Hoon Lee, Shingo Okamoto (Department of Mechanical Engineering, Graduate School of Science and Engineering, Ehime University, Matsuyama 790-8577, Japan) | https://www.mdpi.com/1424-8220/21/8/2814
collection DOAJ
language English
format Article
sources DOAJ
author Tsige Tadesse Alemayoh
Jae Hoon Lee
Shingo Okamoto
title New Sensor Data Structuring for Deeper Feature Extraction in Human Activity Recognition
publisher MDPI AG
series Sensors
issn 1424-8220
publishDate 2021-04-01
description For the effective application of thriving human-assistive technologies in healthcare services and human–robot collaborative tasks, computing devices must be aware of human movements. Developing a reliable real-time activity recognition method for the continuous and smooth operation of such smart devices is imperative. To achieve this, lightweight and intelligent methods that use ubiquitous sensors are pivotal. In this study, with the correlation of time-series data in mind, a new method of data structuring for deeper feature extraction is introduced. The activity data were collected using a smartphone through a purpose-built iOS application. Data from eight activities were shaped into single- and double-channel inputs to extract deep temporal and spatial features of the signals. In addition to the time domain, the raw data were also represented in the Fourier and wavelet domains. Among the several neural network models trained for deep-learning classification of the activities, a convolutional neural network with a double-channel time-domain input performed best. The method was further evaluated on other public datasets, where better performance was obtained. Finally, the practicality of the trained model was tested in real time on a computer and a smartphone, where it demonstrated promising results.
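As a hedged sketch of the classification stage only, the following shows how a small convolutional network over the double-channel time-domain input, with eight output classes matching the eight activities in the description, could look. PyTorch is an assumption (the record does not name the framework), and the kernel sizes, channel counts, and pooling are illustrative rather than the architecture reported in the paper.

import torch
import torch.nn as nn

class HARCnn(nn.Module):
    """Toy CNN classifier for inputs of shape (batch, 2, window, 3 axes)."""
    def __init__(self, window=128, n_classes=8):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(2, 16, kernel_size=(5, 3), padding=(2, 1)),  # convolve over time and axes
            nn.ReLU(),
            nn.MaxPool2d(kernel_size=(2, 1)),                      # pool along the time axis only
            nn.Conv2d(16, 32, kernel_size=(5, 3), padding=(2, 1)),
            nn.ReLU(),
            nn.MaxPool2d(kernel_size=(2, 1)),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * (window // 4) * 3, n_classes),
        )

    def forward(self, x):
        return self.classifier(self.features(x))

model = HARCnn()
logits = model(torch.randn(4, 2, 128, 3))
print(logits.shape)  # torch.Size([4, 8])

Pooling only along the time dimension is one way to keep the per-axis structure intact, in the spirit of the temporal and spatial feature extraction the description mentions; the paper's actual layer configuration is not given in this record.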
topic human activity recognition
inertial measurement unit sensors
deep learning
convolutional neural network
input adaptation
url https://www.mdpi.com/1424-8220/21/8/2814