Energy-efficient speaker identification with low-precision networks


Bibliographic Details
Main Author: Koppula, Skanda
Other Authors: Anantha P. Chandrakasan and James R. Glass.
Format: Others
Language: English
Published: Massachusetts Institute of Technology, 2018
Subjects:
Online Access: http://hdl.handle.net/1721.1/119777
Description
Summary: Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2018. Cataloged from PDF version of thesis. Includes bibliographical references (pages 91-96).

In this thesis, I demonstrate an approach for text-independent speaker identification, targeting evaluation on low-cost, low-resource FPGAs. In the first half of this work, we contribute a set of speaker ID models that build on the prior small-model state of the art and reduce model size by more than 85% while keeping accuracy within 3% of the baseline. We employ model quantization and pruning to achieve this size reduction. To the best of our knowledge, this is the first speaker identification model sized to fit in the on-chip memory of commodity FPGAs, which allows us to reduce power consumption. Our experiments illustrate the accuracy/memory-footprint trade-off for baseline and compressed speaker identification models. Second, I build an RTL design for efficient evaluation of a subset of our speaker ID models. In particular, I design, implement, and benchmark architectures for low-precision fixed-point neural network evaluation and ternary network evaluation. Compared to a baseline full-precision network accelerator with the same timing constraints, based on designs from prior work [Chen et al., 2017], our low-precision, sparsity-cognizant design decreases LUT/FF resource utilization by 27% and power consumption by 12% in simulation. This work has applications to the growing number of speech systems run on consumer devices and in data centers. A demonstration video and testing logs illustrating results are available at https://skoppula.github.io/thesis.html.

by Skanda Koppula.

M. Eng.
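
The summary names quantization, pruning, and ternary networks as the compression techniques. The sketch below is a minimal, hypothetical illustration (not code from the thesis) of magnitude pruning followed by ternary weight quantization; the function names and the delta = 0.7 * mean(|w|) threshold heuristic are assumptions, included only to show why such weights fit in a far smaller (roughly 2-bit-per-weight) memory footprint than 32-bit floats.

```python
# Illustrative sketch only: magnitude pruning + ternary quantization of a
# weight matrix. Thresholds and names here are assumptions, not the thesis code.
import numpy as np


def prune_by_magnitude(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero out the smallest-magnitude fraction `sparsity` of the weights."""
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)
    if k == 0:
        return weights.copy()
    threshold = np.partition(flat, k - 1)[k - 1]
    pruned = weights.copy()
    pruned[np.abs(pruned) <= threshold] = 0.0
    return pruned


def ternarize(weights: np.ndarray, delta_scale: float = 0.7) -> tuple[np.ndarray, float]:
    """Map weights to {-1, 0, +1} plus a single scale alpha.

    Uses the common heuristic delta = delta_scale * mean(|w|) as the
    assignment threshold (an assumption for this sketch).
    """
    delta = delta_scale * np.mean(np.abs(weights))
    ternary = np.zeros_like(weights)
    ternary[weights > delta] = 1.0
    ternary[weights < -delta] = -1.0
    mask = ternary != 0
    alpha = float(np.mean(np.abs(weights[mask]))) if mask.any() else 0.0
    return ternary, alpha


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    w = rng.normal(size=(256, 256)).astype(np.float32)

    w_pruned = prune_by_magnitude(w, sparsity=0.5)
    w_ternary, alpha = ternarize(w_pruned)

    # Ternary weights need about 2 bits each versus 32 bits for float32,
    # and the zeros introduced by pruning can be skipped by a
    # sparsity-aware accelerator datapath.
    print(f"nonzero fraction after pruning: {(w_pruned != 0).mean():.2f}")
    print(f"scale alpha: {alpha:.3f}; ternary values: {np.unique(w_ternary)}")
```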