No “Self” Advantage for Audiovisual Speech Aftereffects

Although the default state of the world is that we see and hear other people talking, there is evidence that seeing and hearing ourselves rather than someone else may lead to visual (i.e., lip-read) or auditory “self” advantages. We assessed whether there is a “self” advantage for phonetic recalibration (a lip-read driven cross-modal learning effect) and selective adaptation (a contrastive effect in the opposite direction of recalibration). We observed both aftereffects as well as an on-line effect of lip-read information on auditory perception (i.e., immediate capture), but there was no evidence for a “self” advantage in any of the tasks (as additionally supported by Bayesian statistics). These findings strengthen the emerging notion that recalibration reflects a general learning mechanism, and bolster the argument that adaptation depends on rather low-level auditory/acoustic features of the speech signal.
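
The abstract notes that the absence of a “self” advantage was additionally supported by Bayesian statistics. As a rough, generic illustration of how evidence for a null effect can be quantified, the sketch below approximates a Bayes factor in favour of the null (BF01) using the BIC approximation on simulated data; it is not the authors' analysis, and the sample size, data, and model are hypothetical.

# Illustrative sketch only (not the authors' analysis): approximating a Bayes factor
# for the null hypothesis of no "self" advantage, via the BIC approximation
# BF01 ~ exp((BIC_H1 - BIC_H0) / 2). Data are simulated placeholders.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical per-participant differences in aftereffect size ("self" minus "other").
diff = rng.normal(loc=0.0, scale=1.0, size=24)

def bic_normal(x, fix_mean_to_zero):
    """BIC of a normal model for x, with the mean fixed at 0 (H0) or estimated (H1)."""
    mu = 0.0 if fix_mean_to_zero else x.mean()
    sigma2 = np.mean((x - mu) ** 2)                 # ML variance estimate
    loglik = np.sum(stats.norm.logpdf(x, loc=mu, scale=np.sqrt(sigma2)))
    k = 1 if fix_mean_to_zero else 2                # number of free parameters
    return k * np.log(x.size) - 2 * loglik

bic_h0 = bic_normal(diff, fix_mean_to_zero=True)    # H0: no "self" advantage
bic_h1 = bic_normal(diff, fix_mean_to_zero=False)   # H1: some "self" advantage
bf01 = np.exp((bic_h1 - bic_h0) / 2)                # BF01 > 1 favours the null
print(f"Approximate BF01 = {bf01:.2f}")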

Bibliographic Details
Main Authors: Maria Modelska, Marie Pourquié, Martijn Baart
Format: Article
Language: English
Published: Frontiers Media S.A. 2019-03-01
Series: Frontiers in Psychology
ISSN: 1664-1078
DOI: 10.3389/fpsyg.2019.00658
Subjects: speech perception; self-advantage; recalibration; adaptation; lip-reading
Online Access: https://www.frontiersin.org/article/10.3389/fpsyg.2019.00658/full
Author Affiliations:
Maria Modelska: BCBL – Basque Center on Cognition, Brain and Language, Donostia, Spain
Marie Pourquié: BCBL – Basque Center on Cognition, Brain and Language, Donostia, Spain; UPPA, IKER (UMR5478), Bayonne, France
Martijn Baart: BCBL – Basque Center on Cognition, Brain and Language, Donostia, Spain; Department of Cognitive Neuropsychology, Tilburg University, Tilburg, Netherlands