Increasing Compactness Of Deep Learning Based Speech Enhancement Models With Parameter Pruning And Quantization Techniques.
Master's thesis === National Taiwan University === Graduate Institute of Electronics Engineering === 107 === Most recent studies on deep learning based speech enhancement (SE) have focused on improving denoising performance. However, successful SE applications require striking a desirable balance between denoising performance and computational cost in real-world scenarios. In th...
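The thesis itself is not reproduced in this record, but the two techniques its title names can be illustrated with a minimal sketch. The snippet below, an assumption-laden illustration using plain NumPy rather than the author's actual method, shows magnitude-based parameter pruning (zeroing the smallest weights) followed by uniform symmetric quantization of the surviving weights:

```python
import numpy as np

def prune_by_magnitude(weights, sparsity):
    """Zero out the smallest-magnitude fraction of the weights.

    A common baseline pruning criterion; the thesis's exact
    criterion may differ.
    """
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)
    if k == 0:
        return weights.copy()
    # k-th smallest magnitude becomes the pruning threshold
    threshold = np.partition(flat, k - 1)[k - 1]
    pruned = weights.copy()
    pruned[np.abs(pruned) <= threshold] = 0.0
    return pruned

def quantize_uniform(weights, num_bits=8):
    """Uniform symmetric quantization to a num_bits integer grid."""
    max_abs = float(np.max(np.abs(weights)))
    if max_abs == 0.0:
        return weights.copy()
    levels = 2 ** (num_bits - 1) - 1      # e.g. 127 for 8 bits
    scale = max_abs / levels
    # round to the nearest grid point, then map back to float
    return np.round(weights / scale) * scale

# Toy dense layer: prune half the weights, then quantize to 8 bits.
rng = np.random.default_rng(0)
w = rng.normal(size=(64, 64)).astype(np.float32)
w_pruned = prune_by_magnitude(w, sparsity=0.5)
w_compact = quantize_uniform(w_pruned, num_bits=8)
print("sparsity:", float(np.mean(w_compact == 0)))
print("distinct values:", len(np.unique(w_compact)))
```

The pruned zeros can be stored in a sparse format and the quantized values as 8-bit integers plus one scale factor, which is where the compactness gain comes from.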
Main Authors: Jyun-Yi Wu, 吳俊易
Other Authors: Shao-Yi Chien
Format: Others
Language: en_US
Published: 2019
Online Access: http://ndltd.ncl.edu.tw/handle/rjz956
Similar Items
- Accelerating Federated Learning for IoT in Big Data Analytics With Pruning, Quantization and Selective Updating
  by: Wenyuan Xu, et al.
  Published: (2021-01-01)
- APQ: Joint Search for Network Architecture, Pruning and Quantization Policy
  by: Wang, Tianzhe, et al.
  Published: (2021)
- Predictive Split Matrix Quantization of Speech LSP Parameters
  by: Jr-Ruei Yu, et al.
  Published: (2009)
- Speech coding technique based upon LPC vector quantization
  by: Ching-Dar Luh, et al.
  Published: (1995)
- Speech Spectral Quantizers for Wideband Speech Coding
  by: Guibé, G., et al.
  Published: (2001)