MuMIA: Multimodal Interactions to Better Understand Art Contexts
Cultural heritage is a challenging domain of application for novel interactive technologies, where varying aspects in the way that cultural assets are delivered play a major role in enhancing the visitor experience, either onsite or online. Technology-supported natural human–computer interaction that is based on multimodalities is a key factor in enabling wider and enriched access to cultural heritage assets. In this paper, we present the design and evaluation of an interactive system that aims to support visitors towards a better understanding of art contexts through the use of a multimodal interface, based on visual and audio interactions. The results of the evaluation study shed light on the dimensions of evoking natural interactions within cultural heritage environments, using micro-narratives for self-exploration and understanding of cultural content, and the intersection between human–computer interaction and artificial intelligence within cultural heritage. We expect our findings to provide useful insights for practitioners and researchers of the broad human–computer interaction and cultural heritage communities on designing and evaluating multimodal interfaces to better support visitor experiences.
Main Authors: | George E. Raptis, Giannis Kavvetsos, Christina Katsini (Human Opsis, Patras, Greece) |
---|---|
Format: | Article |
Language: | English |
Published: | MDPI AG, 2021-03-01 |
Series: | Applied Sciences |
ISSN: | 2076-3417 |
DOI: | 10.3390/app11062695 |
Subjects: | human–computer interaction; multimodal interactions; eye tracking; voice; cultural heritage; museum |
Online Access: | https://www.mdpi.com/2076-3417/11/6/2695 |