The Implementations for Fixed-Point and Floating-Point Recurrent Neural Networks

Bibliographic Details
Main Authors: LIN, YU-CHING, 林玉青
Other Authors: KO, HSIEN-JU
Format: Others
Language: zh-TW
Published: 2019
Online Access: http://ndltd.ncl.edu.tw/handle/9yf8k2
Description
Summary: Master's === Asia University === Department of Photonics and Communication Engineering === 107 === In this thesis, the learning performance of fixed-point and floating-point implementations of single-layer and double-layer recurrent neural networks is studied. A recurrent neural network is, in essence, the combination of a feed-forward neural network and infinite impulse response (IIR) filters. We develop an optimized filter structure and investigate its learning behavior on finite-precision digital devices. We test the robustness of fixed-point and floating-point numbers at different finite precisions and optimize the state-space structure against finite-precision effects, so that the sensitivity of the system parameters to finite precision can be effectively reduced at shorter word lengths. Once the optimal structure is synthesized, the RNN system can be stabilized with a shorter word length. The finite-precision performance of the single-layer and double-layer recurrent neural networks is then compared. The results show that, before optimization, the fixed-point implementation performs slightly worse than the floating-point one. After optimization, the double-layer RNN has better learning performance than the single-layer RNN under finite precision. Finally, we verify the effectiveness of the approach with numerical examples.
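
To make the fixed-point versus floating-point comparison concrete, the following is a minimal sketch, not the thesis's actual structure or optimization procedure: it runs one state-space (IIR-like) section of an RNN once with ordinary double-precision arithmetic and once with every coefficient and intermediate value passed through a round-to-nearest, saturating fixed-point quantizer. The function names, the 2-state matrices, and the quantizer settings (word_length=12, frac_bits=8) are illustrative assumptions.

import numpy as np

def quantize_fixed_point(x, word_length, frac_bits):
    """Round-to-nearest fixed-point quantization with saturation.
    word_length counts all bits including the sign bit; frac_bits is the
    fractional word length, so the resolution is 2**-frac_bits.
    (Illustrative quantizer, an assumption rather than the thesis's format.)"""
    scale = 2.0 ** frac_bits
    max_int = 2 ** (word_length - 1) - 1
    min_int = -(2 ** (word_length - 1))
    return np.clip(np.round(np.asarray(x) * scale), min_int, max_int) / scale

def state_space_step(A, B, C, D, x, u, q=lambda v: v):
    """One step of the state-space (IIR) recursion inside the RNN:
        x[k+1] = A x[k] + B u[k],   y[k] = C x[k] + D u[k]
    Every coefficient and intermediate result is passed through q(),
    which models either ideal arithmetic or finite word-length arithmetic."""
    x_next = q(q(A) @ q(x) + q(B) * u)
    y = q(q(C) @ q(x) + q(D) * u)
    return x_next, float(y)

# Compare a double-precision run against a 12-bit fixed-point run
# on the same (hypothetical, stable) 2-state section and input sequence.
A = np.array([[0.7, 0.2], [-0.3, 0.5]])
B = np.array([1.0, 0.5])
C = np.array([0.8, -0.4])
D = 0.1
rng = np.random.default_rng(0)
u_seq = rng.standard_normal(100)

fx = lambda v: quantize_fixed_point(v, word_length=12, frac_bits=8)
x_ref, x_fix, sq_err = np.zeros(2), np.zeros(2), 0.0
for u in u_seq:
    x_ref, y_ref = state_space_step(A, B, C, D, x_ref, u)
    x_fix, y_fix = state_space_step(A, B, C, D, x_fix, u, q=fx)
    sq_err += (y_ref - y_fix) ** 2
print("accumulated squared output error due to quantization:", sq_err)

In this sketch the accumulated output error is the crude analogue of the parameter-sensitivity measure discussed above: an optimized (better-conditioned) realization of A, B, C, D would keep this error small at a shorter word length, which is the effect the thesis exploits; the nonlinear feed-forward layers of the RNN are omitted here for brevity.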