Top-Down Processes in Simulated Combined Electric-Acoustic Hearing: The Effect of Context and the Role of Low-Frequency Cues in the Perception of Temporally Interrupted Speech
Format: Others
Published: Scholar Commons, 2014
Online Access: https://scholarcommons.usf.edu/etd/5379 https://scholarcommons.usf.edu/cgi/viewcontent.cgi?article=6578&context=etd
Summary: In recent years, the number of unilateral cochlear implant (CI) users with functional residual hearing has increased and bimodal hearing has become more prevalent. According to the multi-source speech perception model, both bottom-up and top-down processes are important components of speech perception in bimodal hearing. Additionally, these two components are thought to interact with each other to different degrees depending on the nature of the speech materials and the quality of the bottom-up cues. Previous studies have documented the benefits of bimodal hearing as compared with a CI alone, but most of them have focused on the importance of bottom-up, low-frequency cues. Because only a few studies have investigated top-down processing in bimodal hearing, relatively little is known about the top-down mechanisms that contribute to bimodal benefit, or the interactions that may occur between bottom-up and top-down processes during bimodal speech perception.
The research described in this dissertation investigated top-down processes of bimodal hearing, and potential interactions between top-down and bottom-up processes, in the perception of temporally interrupted speech. Temporally interrupted sentences were used to assess listeners' ability to fill in missing segments of speech by using top-down processing. Young normal-hearing listeners were tested in simulated bimodal listening conditions in which noise-band vocoded sentences were presented to one ear with or without low-pass (LP) filtered speech or LP harmonic complexes (LPHCs) presented to the contralateral ear. Sentences were square-wave gated at a rate of 5 Hz with a 50 percent duty cycle. Two factors that were expected to influence bimodal benefit were examined: the amount of linguistic context available in the speech stimuli, and the continuity of low-frequency cues.
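The square-wave gating described above can be sketched as follows. This is a minimal illustration, not the study's actual stimulus-generation code: the function name and parameters are hypothetical, and a 5 Hz rate with a 50 percent duty cycle yields alternating 100 ms on/off segments.

```python
import numpy as np

def gate_signal(signal, fs, rate_hz=5.0, duty=0.5):
    """Square-wave gate a signal: alternate on/off at rate_hz.

    With rate_hz=5 and duty=0.5 (the parameters reported in the
    abstract), each 200 ms cycle contains 100 ms of signal followed
    by 100 ms of silence.
    """
    t = np.arange(len(signal)) / fs
    # Gate is 1 during the "on" fraction of each cycle, 0 otherwise
    gate = ((t * rate_hz) % 1.0) < duty
    return signal * gate

# Example: gate 1 s of a 440 Hz tone sampled at 16 kHz
fs = 16000
tone = np.sin(2 * np.pi * 440 * np.arange(fs) / fs)
gated = gate_signal(tone, fs)
```

In a real stimulus pipeline the gating function would typically be smoothed with short onset/offset ramps to avoid spectral splatter; the hard-edged gate here keeps the example short.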
Experiment 1 evaluated the effect of sentence context on bimodal benefit for temporally interrupted sentences from the City University of New York (CUNY) and Institute of Electrical and Electronics Engineers (IEEE) sentence corpora. It was hypothesized that acoustic low-frequency information would facilitate linguistic top-down processing such that the higher-context CUNY sentences would produce more bimodal benefit than the lower-context IEEE sentences. Three vocoder channel conditions were tested for each type of sentence (8, 12, and 16 channels for CUNY; 12, 16, and 32 channels for IEEE), in combination with either LP speech or LPHCs. Bimodal benefit was compared for similar amounts of spectral degradation (matched channels) and similar ranges of baseline performance. Two gain measures, percentage point gain and normalized gain, were examined.
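The two gain measures can be written out explicitly. The formulas below follow the usual conventions in the bimodal-benefit literature (percentage point gain as the raw score difference; normalized gain as the difference divided by the headroom above baseline); the dissertation's exact definitions may differ, so treat this as an illustrative sketch.

```python
def percentage_point_gain(bimodal_score, vocoder_alone_score):
    """Absolute improvement, in percentage points of percent-correct."""
    return bimodal_score - vocoder_alone_score

def normalized_gain(bimodal_score, vocoder_alone_score):
    """Gain as a fraction of the available room for improvement.

    Undefined when the baseline is already 100% correct, which is one
    reason both measures are typically reported together.
    """
    return (bimodal_score - vocoder_alone_score) / (100.0 - vocoder_alone_score)

# e.g. vocoder-alone baseline of 60% correct, bimodal score of 70% correct
pp = percentage_point_gain(70.0, 60.0)   # 10 percentage points
ng = normalized_gain(70.0, 60.0)         # 0.25 of the available headroom
```

Normalized gain matters when comparing conditions with different baselines: a 10-point gain from a 60% baseline uses up more of the remaining headroom than the same 10 points from a 20% baseline.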
Experiment 1 revealed clear effects of context on bimodal benefit for temporally interrupted speech, when LP speech was presented to the residual-hearing ear, thereby providing additional support for the notion that low-frequency cues can enhance listeners' use of top-down processing. However, the bimodal benefits observed for temporally interrupted speech were considerably smaller than those observed in an earlier study that used continuous speech. In addition, unlike previous findings for continuous speech, no bimodal benefits were observed when LPHCs were presented to the LP ear.
Experiments 2 and 3 further investigated the effects of low-frequency cues on bimodal benefit by systematically restoring continuity to temporally interrupted signals in the vocoder and/or LP ears. Stimuli were 12-channel CUNY sentences presented to the vocoder ear, and LPHCs presented to the LP ear. Signal continuity was restored to the vocoder ear by filling silent gaps in sentences with envelope-modulated, speech-shaped noise. Continuity was restored to signals in the LP ear by filling gaps with envelope-modulated LP noise or by using continuous LPHCs. It was hypothesized that the restoration of continuity in one or both ears would improve bimodal benefit relative to the condition in which both ears received temporally interrupted stimuli.
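The continuity-restoration manipulation for the gated signal can be sketched as below. This is a simplified stand-in, not the study's procedure: the study used speech-shaped noise and LP noise matched to the target band, whereas this sketch uses white noise and a crude rectified envelope, and the function name is hypothetical.

```python
import numpy as np

def fill_gaps_with_noise(gated, original, fs, rate_hz=5.0, duty=0.5, rng=None):
    """Restore continuity to a square-wave-gated signal by filling the
    silent gaps with noise modulated by the original signal's envelope.

    `gated` is the interrupted signal; `original` is the uninterrupted
    signal from which the amplitude envelope is taken.
    """
    rng = np.random.default_rng() if rng is None else rng
    t = np.arange(len(gated)) / fs
    off = ((t * rate_hz) % 1.0) >= duty      # samples inside silent gaps
    envelope = np.abs(original)              # crude amplitude envelope
    noise = rng.standard_normal(len(gated))
    filled = gated.copy()
    filled[off] = noise[off] * envelope[off] # envelope-modulated noise
    return filled
```

In practice the envelope would be smoothed (e.g. low-pass filtered below the gating rate) and the noise spectrally shaped to match long-term speech spectra, but the structure of the manipulation, keeping the speech segments and replacing only the gaps, is the same.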
The results from Experiments 2 and 3 showed that restoring continuity to the simulated residual-hearing or CI ear improved bimodal benefits, but that the greatest improvement was observed when continuity was restored to both ears. These findings support the conclusion that temporal interruption disrupts top-down enhancement effects in bimodal hearing. Lexical segmentation and perceptual continuity were identified as factors that could potentially explain the increased bimodal benefit for continuous, as compared to temporally interrupted, speech.
Taken together, the findings from Experiments 1-3 provide additional evidence that low-frequency sensory information can provide bimodal benefit for speech that is spectrally and/or temporally degraded by improving listeners' ability to make use of top-down processing. Findings further suggest that temporal degradation reduces top-down enhancement effects in bimodal hearing, thereby reducing bimodal benefit for temporally interrupted speech as compared to continuous speech.