Motherese by eye and ear: infants perceive visual prosody in point-line displays of talking heads.

Infant-directed (ID) speech provides exaggerated auditory and visual prosodic cues. Here we investigated whether infants are sensitive to the match between the auditory and visual correlates of ID speech prosody. We presented 8-month-old infants with two silent line-joined point-light displays of faces speaking different ID sentences, together with a single vocal-only sentence matched to one of the displays. Infants looked longer at the matched than the mismatched visual signal both when full-spectrum speech was presented and when the vocal signals contained speech low-pass filtered at 400 Hz. When the visual display was separated into rigid (head-only) and non-rigid (face-only) motion, infants looked longer at the visual match in the rigid condition and at the visual mismatch in the non-rigid condition. Overall, the results suggest that 8-month-olds can extract information about the prosodic structure of speech from voice and head kinematics and are sensitive to their match, and that they are less sensitive to the match between lip and voice information in connected speech.
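One methodological detail in the abstract — low-pass filtering speech at 400 Hz so that fine spectral (segmental) detail is removed while the pitch contour carrying prosody survives — can be sketched in code. The snippet below is an illustrative reconstruction, not the authors' stimulus-preparation script; the filter design (an 8th-order zero-phase Butterworth filter via SciPy) and the 16 kHz sample rate are assumptions.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def lowpass_400hz(signal, sample_rate):
    """Zero-phase low-pass filter with a 400 Hz cutoff.

    Removes most segmental (spectral) detail from speech while
    preserving the fundamental-frequency contour that carries prosody.
    The 8th-order Butterworth design is an assumption, not taken from
    the paper.
    """
    sos = butter(8, 400, btype="low", fs=sample_rate, output="sos")
    return sosfiltfilt(sos, signal)

# Demo with synthetic tones: a 200 Hz component (below the cutoff,
# roughly the range of an ID-speech pitch contour) passes through,
# while a 2 kHz component (segmental-detail range) is almost
# entirely attenuated.
fs = 16000
t = np.arange(fs) / fs                    # 1 second of samples
low_tone = np.sin(2 * np.pi * 200 * t)    # survives the filter
high_tone = np.sin(2 * np.pi * 2000 * t)  # removed by the filter
filtered = lowpass_400hz(low_tone + high_tone, fs)
```

After filtering the mixed signal, `filtered` is nearly identical to `low_tone` alone, which is the point of the manipulation: prosodic periodicity is retained, intelligibility-bearing detail is not.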

Bibliographic Details
Main Authors: Christine Kitamura, Bahia Guellaï, Jeesun Kim
Format: Article
Language: English
Published: Public Library of Science (PLoS), 2014-01-01
Series: PLoS ONE
Online Access: http://europepmc.org/articles/PMC4213016?pdf=render
Citation: Kitamura C, Guellaï B, Kim J (2014) Motherese by eye and ear: infants perceive visual prosody in point-line displays of talking heads. PLoS ONE 9(10): e111467. doi:10.1371/journal.pone.0111467 (ISSN 1932-6203)