Diphone-based speech recognition using neural networks

Bibliographic Details
Main Author: Cantrell, Mark E.
Other Authors: Robert B. McGhee
Language: English
Published: Monterey, California: Naval Postgraduate School, 2013
Online Access: http://hdl.handle.net/10945/32063
Description
Summary: Speaker-independent automatic speech recognition (ASR) is a problem of long-standing interest to the Department of Defense. Unfortunately, existing systems are still too limited in capability for many military purposes. Most large-vocabulary systems use phonemes (individual speech sounds, including vowels and consonants) as recognition units. This research explores the use of diphones (pairings of phonemes) as recognition units. Diphones are acoustically easier to recognize because coarticulation effects between a diphone's phonemes become recognition features, rather than confounding variables as in phoneme recognition. Also, diphones carry more information than phonemes, giving the lexical analyzer two chances to detect every phoneme in a word. Research results confirm these theoretical advantages. In testing with 4490 speech samples from 163 speakers, 70.2% of 157 test diphones were correctly identified by one trained neural network. In the same tests, the correct diphone was among the top three outputs 89.0% of the time. During word recognition tests, the correct word was detected 85% of the time in continuous speech. Of those detections, the correct word was ranked first 41.6% of the time and among the top six 74% of the time. In addition, new methods of pitch-based frequency normalization and network feedback-based time alignment are introduced. Both techniques improved recognition accuracy on male and female speech samples from all eight U.S. dialect regions. In one test set, frequency normalization reduced errors by 34%. Similarly, feedback-based time alignment reduced another network's test-set errors from 32.8% to 11.0%.
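
The diphone decomposition itself is easy to picture. The following minimal sketch (not taken from the thesis; the function name and the "#" word-boundary marker are illustrative assumptions) shows how a word's phoneme string yields overlapping diphones, so that every phoneme is covered by two units:

    def diphones(phonemes, boundary="#"):
        # Pad with boundary markers so word-initial and word-final
        # phonemes also participate in two diphones each. The "#"
        # marker is an illustrative convention, not the thesis's.
        seq = [boundary] + list(phonemes) + [boundary]
        # Each adjacent pair of symbols is one diphone.
        return [(seq[i], seq[i + 1]) for i in range(len(seq) - 1)]

    # "cat" -> /k ae t/: each phoneme appears in two diphones,
    # giving the lexical analyzer two chances to detect it.
    print(diphones(["k", "ae", "t"]))
    # [('#', 'k'), ('k', 'ae'), ('ae', 't'), ('t', '#')]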
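The abstract does not detail the pitch-based frequency normalization. One plausible reading, sketched below purely as an assumption, rescales each spectral frame's frequency axis by the speaker's estimated pitch relative to a fixed reference, so that higher- and lower-pitched voices map onto a common spectral scale. The 150 Hz reference value and all names are illustrative, not the thesis's.

    import numpy as np

    def normalize_spectrum(frame, pitch_hz, ref_pitch_hz=150.0):
        # Warp the frequency axis by pitch_hz / ref_pitch_hz: a
        # higher-pitched voice is compressed toward the reference
        # scale, a lower-pitched one is stretched. (Assumed scheme.)
        bins = np.arange(len(frame), dtype=float)
        source = bins * (pitch_hz / ref_pitch_hz)
        # Resample the warped spectrum back onto the original bins.
        return np.interp(source, bins, frame, right=0.0)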
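Likewise, "network feedback-based time alignment" is only named here. One way such feedback could work, offered strictly as a guess at the idea rather than the thesis's method, is to slide the analysis window over small offsets and keep the alignment the trained network scores most confidently; net below stands for any callable mapping a window of frames to per-diphone scores.

    def align_by_feedback(frames, net, window_len, max_shift=3):
        # Try small window offsets and keep the one whose input the
        # network scores most confidently (its maximum output).
        best_shift, best_score = 0, float("-inf")
        for shift in range(max_shift + 1):
            window = frames[shift:shift + window_len]
            if len(window) < window_len:
                break  # ran off the end of the utterance
            score = max(net(window))
            if score > best_score:
                best_shift, best_score = shift, score
        return best_shift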