Facial Muscle Activity Recognition with Reconfigurable Differential Stethoscope-Microphones

Many human activities and states are related to the actions of the facial muscles: from the expression of emotions, stress, and non-verbal communication, through health-related actions such as coughing and sneezing, to nutrition and drinking. In this work, we describe in detail the design and evaluation of a wearable system for facial muscle activity monitoring based on a reconfigurable differential array of stethoscope-microphones. In our system, six stethoscopes are placed at locations that could easily be integrated into the frame of smart glasses. The paper describes the detailed hardware design and the selection and adaptation of appropriate signal processing and machine learning methods. For the evaluation, we asked eight participants to imitate a set of facial actions, such as expressions of happiness, anger, surprise, sadness, upset, and disgust, and gestures like kissing, winking, sticking the tongue out, and taking a pill. An evaluation of a complete data set of 2640 events with a 66% training and 33% testing split has been performed. Although we encountered high variability in the volunteers' expressions, our approach achieves a recall of 55%, a precision of 56%, and an F1-score of 54% in the user-independent scenario (9% chance level). On a user-dependent basis, our worst result has an F1-score of 60% and our best an F1-score of 89%, with a recall ≥ 60% for classes such as happiness, anger, kissing, sticking the tongue out, and neutral (Null class).
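For orientation, the sketch below (not from the paper) illustrates the reported evaluation protocol: a 66%/33% train/test split over the 2640 labelled events and macro-averaged precision, recall, and F1 over 11 classes (ten facial actions plus the Null class), which corresponds to the quoted chance level of roughly 1/11 ≈ 9%. The feature matrix and the classifier are placeholders; the authors' actual signal-processing and machine-learning pipeline is not reproduced here.

# Minimal evaluation sketch, assuming placeholder features and a generic classifier.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier  # placeholder model, not the paper's method
from sklearn.metrics import precision_recall_fscore_support

# Hypothetical feature matrix: 2640 events x 128 features (e.g., per-channel
# spectral features from the six stethoscope-microphones); 11 classes, so the
# chance level is about 1/11 ~= 9%, as stated in the abstract.
rng = np.random.default_rng(0)
X = rng.normal(size=(2640, 128))
y = rng.integers(0, 11, size=2640)

# 66% training / 33% testing split, as reported in the abstract.
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.33, stratify=y, random_state=0
)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
prec, rec, f1, _ = precision_recall_fscore_support(
    y_te, clf.predict(X_te), average="macro"
)
# With purely random features, the scores land near the 9% chance level.
print(f"precision={prec:.2f} recall={rec:.2f} f1={f1:.2f}")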


Bibliographic Details
Main Authors: Hymalai Bello, Bo Zhou, Paul Lukowicz (German Research Center for Artificial Intelligence (DFKI), 67663 Kaiserslautern, Germany)
Format: Article
Language: English
Published: MDPI AG, 2020-08-01
Series: Sensors
ISSN: 1424-8220
Subjects: head mounted sensors; microphone-array; gesture recognition; wearable sensors; sound mechanomyography
Online Access: https://www.mdpi.com/1424-8220/20/17/4904