Cortical encoding of acoustic and linguistic rhythms in spoken narratives

Speech contains rich acoustic and linguistic information. Using highly controlled speech materials, previous studies have demonstrated that cortical activity is synchronous to the rhythms of perceived linguistic units, for example, words and phrases, on top of basic acoustic features, for example, the speech envelope. When listening to natural speech, it remains unclear, however, how cortical activity jointly encodes acoustic and linguistic information. Here we investigate the neural encoding of words using electroencephalography and observe neural activity synchronous to multi-syllabic words when participants naturally listen to narratives. An amplitude modulation (AM) cue for word rhythm enhances the word-level response, but the effect is only observed during passive listening. Furthermore, words and the AM cue are encoded by spatially separable neural responses that are differentially modulated by attention. These results suggest that bottom-up acoustic cues and top-down linguistic knowledge separately contribute to cortical encoding of linguistic units in spoken narratives.


Bibliographic Details
Main Authors: Cheng Luo, Nai Ding
Format: Article
Language: English
Published: eLife Sciences Publications Ltd, 2020-12-01
Series: eLife
Subjects: speech envelope; language; attention; rhythm; frequency tagging; spoken narratives
Online Access: https://elifesciences.org/articles/60433
Author Affiliations:
Cheng Luo: Key Laboratory for Biomedical Engineering of Ministry of Education, College of Biomedical Engineering and Instrument Sciences, Zhejiang University, Hangzhou, China
Nai Ding (ORCID: https://orcid.org/0000-0003-3428-2723): Key Laboratory for Biomedical Engineering of Ministry of Education, College of Biomedical Engineering and Instrument Sciences, Zhejiang University, Hangzhou, China; Research Center for Advanced Artificial Intelligence Theory, Zhejiang Lab, Hangzhou, China

DOI: 10.7554/eLife.60433
ISSN: 2050-084X
Collection: DOAJ