Energy-efficient speaker identification with low-precision networks
Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2018. === Cataloged from PDF version of thesis. === Includes bibliographical references (pages 91-96). === In this thesis, I demonstrate an approach for text-independent speaker identification, targeting evaluation on low-cost, low-resource FPGAs.
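The abstract's compression approach combines quantization and pruning, including a ternary-network variant. The following is an illustrative NumPy sketch of standard ternary weight quantization (weights mapped to {-α, 0, +α}), not code from the thesis itself; the threshold heuristic (0.7 × mean |W|) and the function name `ternarize` are assumptions borrowed from common ternary-network practice.

```python
import numpy as np

def ternarize(w, thresh_factor=0.7):
    """Quantize a float weight tensor to the three values {-alpha, 0, +alpha}.

    Weights with magnitude below a threshold are pruned to zero; the
    scale alpha is the mean magnitude of the surviving weights.
    """
    delta = thresh_factor * np.mean(np.abs(w))   # pruning threshold (assumed heuristic)
    mask = np.abs(w) > delta                     # which weights survive pruning
    alpha = np.abs(w[mask]).mean() if mask.any() else 0.0
    return alpha * np.sign(w) * mask             # ternary tensor in {-alpha, 0, +alpha}

# Demo on a small random weight matrix.
rng = np.random.default_rng(0)
w = rng.standard_normal((4, 4)).astype(np.float32)
wq = ternarize(w)
```

Storing only the mask and signs (2 bits per weight) plus one scale per tensor is what yields the large model-size reductions the abstract describes.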
Main Author: | Koppula, Skanda |
---|---|
Other Authors: | Anantha P. Chandrakasan, James R. Glass |
Format: | Others |
Language: | English |
Published: | Massachusetts Institute of Technology, 2018 |
Subjects: | Electrical Engineering and Computer Science |
Online Access: | http://hdl.handle.net/1721.1/119777 |
id |
ndltd-MIT-oai-dspace.mit.edu-1721.1-119777 |
record_format |
oai_dc |
spelling |
ndltd-MIT-oai-dspace.mit.edu-1721.1-1197772019-05-02T16:28:33Z Energy-efficient speaker identification with low-precision networks Koppula, Skanda Anantha P. Chandrakasan and James R. Glass. Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science. Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science. Electrical Engineering and Computer Science. Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2018. Cataloged from PDF version of thesis. Includes bibliographical references (pages 91-96). In this thesis, I demonstrate an approach for text-independent speaker identification, targeting evaluation on low-cost, low-resource FPGAs. In the first half of this work, we contribute a set of of speaker ID models that build on prior existing small-model state-of-art, and reduce bytesize by >85%, with a 3% accuracy change tolerance. We employ model quantization and pruning to achieve this size reduction. To the best of our knowledge, this is the first speaker identification model sized to fit in the on-chip memory of commodity FPGAs, allowing us to reduce power consumption. Our experiments allow us to illustrate the accuracy/memory-footprint trade-off for baseline and compressed speaker identification models. Second, I build an RTL design for efficient evaluation of a subset of our speaker ID models. In particular, I design, implement, and benchmark architectures for low-precision fixed point neural network evaluation and ternary network evaluation. Compared to a baseline full-precision network accelerator with the same timing constraints based on designs from prior work, our low-precision, sparsity-cognizant design decreases LUT/FF resource utilization by 27% and power consumption by 12% in simulation [Chen et al., 2017]. This work has applications to the growing number of speech systems run on consumer devices and data centers. 
A demonstration video and testing logs illustrating results are available at https: //skoppula. github. io/thesis.html. by Skanda Koppula. M. Eng. 2018-12-18T20:04:13Z 2018-12-18T20:04:13Z 2018 2018 Thesis http://hdl.handle.net/1721.1/119777 1078689815 eng MIT theses are protected by copyright. They may be viewed, downloaded, or printed from this source but further reproduction or distribution in any format is prohibited without written permission. http://dspace.mit.edu/handle/1721.1/7582 96 pages application/pdf Massachusetts Institute of Technology |
collection |
NDLTD |
language |
English |
format |
Others |
sources |
NDLTD |
topic |
Electrical Engineering and Computer Science. |
spellingShingle |
Electrical Engineering and Computer Science. Koppula, Skanda Energy-efficient speaker identification with low-precision networks |
description |
Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2018. === Cataloged from PDF version of thesis. === Includes bibliographical references (pages 91-96). === In this thesis, I demonstrate an approach for text-independent speaker identification, targeting evaluation on low-cost, low-resource FPGAs. In the first half of this work, we contribute a set of speaker ID models that build on the prior small-model state of the art and reduce model size by more than 85% while keeping the change in accuracy within 3%. We employ model quantization and pruning to achieve this size reduction. To the best of our knowledge, this is the first speaker identification model sized to fit in the on-chip memory of commodity FPGAs, allowing us to reduce power consumption. Our experiments illustrate the accuracy/memory-footprint trade-off for baseline and compressed speaker identification models. Second, I build an RTL design for efficient evaluation of a subset of our speaker ID models. In particular, I design, implement, and benchmark architectures for low-precision fixed-point neural network evaluation and ternary network evaluation. Compared to a baseline full-precision network accelerator with the same timing constraints, based on designs from prior work [Chen et al., 2017], our low-precision, sparsity-cognizant design decreases LUT/FF resource utilization by 27% and power consumption by 12% in simulation. This work has applications to the growing number of speech systems run on consumer devices and data centers. A demonstration video and testing logs illustrating results are available at https://skoppula.github.io/thesis.html. === by Skanda Koppula. === M. Eng. |
author2 |
Anantha P. Chandrakasan and James R. Glass. |
author_facet |
Anantha P. Chandrakasan and James R. Glass. Koppula, Skanda |
author |
Koppula, Skanda |
author_sort |
Koppula, Skanda |
title |
Energy-efficient speaker identification with low-precision networks |
title_short |
Energy-efficient speaker identification with low-precision networks |
title_full |
Energy-efficient speaker identification with low-precision networks |
title_fullStr |
Energy-efficient speaker identification with low-precision networks |
title_full_unstemmed |
Energy-efficient speaker identification with low-precision networks |
title_sort |
energy-efficient speaker identification with low-precision networks |
publisher |
Massachusetts Institute of Technology |
publishDate |
2018 |
url |
http://hdl.handle.net/1721.1/119777 |
work_keys_str_mv |
AT koppulaskanda energyefficientspeakeridentificationwithlowprecisionnetworks |
_version_ |
1719041209797705728 |