Effects of speechreading and signal-to-noise ratio on understanding mainstream American English by American and Indian adults.

Bibliographic Details
Main Author: Kanekama, Yori
Other Authors: Downs, David
Format: Others
Language: en_US
Published: Wichita State University, 2010
Online Access: http://hdl.handle.net/10057/2369
Description
Summary: The purpose of this study was to measure the effects of speechreading and signal-to-noise ratio (SNR) on understanding mainstream American English (MAE) heard by 30 Indian adults compared with 30 American adults. Participants listened to a recording of a female speaker of MAE saying 10 lists of 10 different Everyday Speech Sentences per list. Participants heard the sentences from a TV loudspeaker at a conversational speech level while a four-talker babble played through two surrounding loudspeakers at a +6, 0, -6, -12, or -18 dB SNR. Participants heard and watched a different list of sentences at each SNR (i.e., through the Auditory-Visual modality) and heard, but did not watch, a different list of sentences at each SNR (i.e., through the Auditory modality). After listening to each sentence, participants wrote verbatim what they thought the speaker had said. Each participant's speechreading performance at each SNR was computed as the difference in words correctly heard through the Auditory-Visual versus the Auditory modality. Consistent with most previous research, American participants benefited significantly more from speechreading at poorer SNRs than at favorable SNRs. The novel finding of this study, however, was that Indian participants benefited less from speechreading than American participants at poorer SNRs but benefited more from speechreading than American participants at favorable SNRs. Linguistic (and possibly nonlinguistic) variables may have accounted for these findings, including an increased need for Indian participants to integrate more auditory cues with visual cues to benefit from speechreading, presumably because they spoke English only as a second language. These findings have theoretical implications for understanding the role of auditory-visual integration in cross-language perception of speech, and practical implications for understanding how much speechreading helps people understand a second language in noisy environments. === Thesis (Ph.D.)--Wichita State University, College of Health Professions, Dept. of Communication Sciences and Disorders
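Note: The speechreading-benefit score described above can be written, as a minimal formalization (the symbols below are illustrative and not part of the original record), as

Benefit_{SNR} = C_{AV, SNR} - C_{A, SNR}

where C_{AV, SNR} and C_{A, SNR} denote the number of words a participant reported correctly at a given SNR in the Auditory-Visual and Auditory conditions, respectively.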