Dynamic Aspects of Character Rendering in the Context of Multimodal Dialog Systems


Bibliographic Details
Main Author: Jung, Yvonne A.
Format: Others
Language: English
Published: 2011
Online Access: http://tuprints.ulb.tu-darmstadt.de/2489/1/yjung_diss.pdf
Jung, Yvonne A.: Dynamic Aspects of Character Rendering in the Context of Multimodal Dialog Systems. Ph.D. Thesis, Technische Universität Darmstadt, 2011.
Description
Summary: Virtual characters offer great potential as an intuitive man-machine interface, because they also allow the simulation of non-verbal communicative behavior, which requires the coordinated use of various modalities (e.g., speech and gesture). In this sense, multimodal dialogue systems extend current voice response systems, such as those known from automated support hotlines, to other modalities. While multimodal dialogue systems have been an active research area in artificial intelligence (AI) for over twenty years, further research is still needed in computer graphics (CG). Hence, two basic problems have been identified in this work. On the one hand, there is a gap between the components of AI and CG, which makes it difficult to provide responsive characters in a manageable manner. On the other hand, embedding virtual agents in full 3D applications, particularly in the context of Mixed Reality, remains problematic. Therefore, this work presents a concept for the presentation component of multimodal dialogue systems that can be easily integrated into current frameworks for virtual agents. Basically, it consists of a declarative control layer and a declarative execution layer. While the control layer is mainly used for communication with the AI modules, it also provides a declarative language for describing and flexibly controlling communicative behavior. The core technology components are provided in the execution layer. These include a flexible animation system integrated into the X3D standard; components for hair simulation, for representing psycho-physiologically caused skin tone changes such as blushing, and for the simulation of tears; methods for the declarative control of the virtual camera; and techniques for the realistic visualization of virtual objects in Mixed Reality scenarios.
In addition to simplifying the integration into complex 3D applications, this also allows the system to use the whole environment as another means of communication.