Contribution of Prosody in Audio-Visual Integration to Emotional Perception of Virtual Characters
Main Authors: Ekaterina Volkova, Betty Mohler, Sally Linkenauger, Ivelina Alexandrova, Heinrich H Bülthoff
Format: Article
Language: English
Published: SAGE Publishing, 2011-10-01
Series: i-Perception
Online Access: https://doi.org/10.1068/ic774
Affiliations: Ekaterina Volkova, Betty Mohler, Sally Linkenauger, and Ivelina Alexandrova, Max Planck Institute for Biological Cybernetics; Heinrich H Bülthoff, Max Planck Institute for Biological Cybernetics and Korea University.

Abstract: Recent technology provides us with realistic-looking virtual characters. Motion capture and elaborate mathematical models supply data for natural-looking, controllable facial and bodily animations. With the help of computational linguistics and artificial intelligence, we can automatically assign emotional categories to appropriate stretches of text to simulate social scenarios where verbal communication is important. All this makes virtual characters a valuable tool for the creation of versatile stimuli for research on the integration of emotion information from different modalities. We conducted an audio-visual experiment to investigate the differential contributions of emotional speech and facial expressions to emotion identification. We used recorded and synthesized speech as well as dynamic virtual faces, all enhanced for seven emotional categories. The participants were asked to recognize the prevalent emotion of paired faces and audio. Results showed that when the voice was recorded, the vocalized emotion influenced participants' emotion identification more than the facial expression. However, when the voice was synthesized, facial expression influenced participants' emotion identification more than vocalized emotion. Additionally, individuals did worse at identifying either the facial expression or the vocalized emotion when the voice was synthesized. Our experimental method can help to determine how to improve synthesized emotional speech.
ISSN: 2041-6695