Combining heterogeneous inputs for the development of adaptive and multimodal interaction systems

In this paper we present a novel framework for the integration of visual sensor networks and speech-based interfaces. Our proposal follows the standard reference architecture for fusion systems (JDL) and combines techniques from Artificial Intelligence, Natural Language Processing and User Modeling to provide enhanced interaction with users. Firstly, the framework integrates a Cooperative Surveillance Multi-Agent System (CS-MAS), which includes several types of autonomous agents working in a coalition to track targets and make inferences about their positions. Secondly, enhanced conversational agents facilitate human-computer interaction by means of speech. Thirdly, a statistical methodology models the user's conversational behavior, which is learned from an initial corpus and improved with the knowledge acquired from successive interactions. A technique is proposed to fuse these multimodal information sources and to take the result into account when deciding the next system action.


Bibliographic Details
Main Authors: David GRIOL, Jesús GARCÍA-HERRERO, José Manuel MOLINA
Format: Article
Language: English
Published: Ediciones Universidad de Salamanca, 2013-11-01
Series: Advances in Distributed Computing and Artificial Intelligence Journal
ISSN: 2255-2863
Subjects: software agents; multimodal fusion; visual sensor networks; surveillance applications; spoken interaction; conversational agents; user modeling; dialog management
Online Access: https://revistas.usal.es/index.php/2255-2863/article/view/11279