Unfolding and dynamics of affect bursts decoding in humans.

The unfolding dynamics of the vocal expression of emotions are crucial for decoding an individual's emotional state. In this study, we analyzed how much information is needed to decode a vocally expressed emotion using affect bursts, a gating paradigm, and linear mixed models. We showed that some emotions (fear, anger, disgust) were recognized significantly better at full duration than others (joy, sadness, neutral). As predicted, recognition improved when a greater proportion of each stimulus was presented. Emotion recognition curves for anger and disgust were best described by higher-order polynomials (second to third), while fear, sadness, neutral, and joy were best described by linear relationships. Acoustic features were extracted for each stimulus and subjected to a principal component analysis for each emotion. The principal components partially predicted recognition accuracy (e.g., for anger, a component encompassing acoustic features such as fundamental frequency (f0) and jitter; for joy, pitch and loudness range). Furthermore, the impact of the principal components on the recognition of anger, disgust, and sadness changed as longer portions were presented. These results underscore the importance of studying the unfolding conscious recognition of emotional vocalizations to reveal the differential contributions of specific acoustic feature sets. These effects are likely due to the relevance of threatening information to the human mind, and to the urgent motor responses required when people are exposed to potential threats, as compared with emotions (e.g., joy) for which no such urgent response is required.
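The curve-fitting result above (linear recognition curves for some emotions, higher-order polynomial curves for others) can be illustrated with a small sketch. Note this is not the paper's analysis: the study fit linear mixed models to participants' gated-recognition data, whereas the sketch below uses invented accuracy values and plain least-squares polynomial fits (`numpy.polyfit`) as a simplified stand-in, only to show how comparing fit quality across polynomial degrees distinguishes the two curve shapes.

```python
# Illustrative sketch only -- simulated data, not the study's dataset or
# its mixed-model analysis. "Gates" are the proportions of each affect
# burst presented, as in a gating paradigm.
import numpy as np

rng = np.random.default_rng(0)
gates = np.linspace(0.1, 1.0, 10)  # proportion of the stimulus presented

# Hypothetical recognition curves mirroring the qualitative pattern
# reported: "joy" rises roughly linearly with gate duration, while
# "anger" saturates early (a higher-order shape).
joy = 0.2 + 0.5 * gates + rng.normal(0, 0.01, gates.size)
anger = 0.9 - 0.7 * (1 - gates) ** 3 + rng.normal(0, 0.01, gates.size)

def fit_rss(y, degree):
    """Residual sum of squares of a degree-`degree` polynomial fit."""
    coeffs = np.polyfit(gates, y, degree)
    return float(np.sum((np.polyval(coeffs, gates) - y) ** 2))

for name, y in [("joy", joy), ("anger", anger)]:
    rss = {d: fit_rss(y, d) for d in (1, 2, 3)}
    print(name, {d: round(v, 5) for d, v in rss.items()})
```

For the linear "joy" curve, a first-degree fit already leaves only noise-level residuals, so higher degrees add little; for the saturating "anger" curve, the residuals drop sharply between the linear and cubic fits, which is the kind of evidence behind describing some emotions' recognition curves with second- to third-order polynomials.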

Bibliographic Details
Main Authors: Simon Schaerlaeken, Didier Grandjean
Format: Article
Language: English
Published: Public Library of Science (PLoS), 2018-01-01
Series: PLoS ONE
ISSN: 1932-6203
DOI: 10.1371/journal.pone.0206216
Online Access: http://europepmc.org/articles/PMC6207317?pdf=render