The Implementations for High Computation-Efficient Recurrent Neural Networks
Master's === Asia University === In-service Master Program, Department of Photonics and Communication Engineering === Academic Year 106 === In this thesis, a recurrent neural network (RNN) with high computational efficiency, based on state-space realizations, is proposed. The proposed RNN has a local-feedback structure, and we consider its implementation on fixed-point digital devices. The computationally efficient state-space realization is synthesized by minimizing a pole sensitivity measure. In contrast to the conventional optimal structures, which require (n+1)² multiplications at every sample time, the proposed structure requires only 4n+1, where n is the order of the state-space system. Trained with the backpropagation learning algorithm, the proposed structure achieves performance similar to that of the conventional optimal structures but with a significantly lower computational burden. Finally, numerical examples illustrate the effectiveness of the proposed approach.
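As a rough illustration of the abstract's complexity claim, the sketch below counts multiplications per sample for a state-space step x[k+1] = A·x[k] + b·u[k], y[k] = c·x[k] + d·u[k]. A fully parameterized (dense) A yields the conventional (n+1)² count, while an A constrained to O(n) nonzero entries yields 4n+1. The 2×2 block-diagonal form, along with the helper names mults_dense, mults_block_diagonal, and step, is an assumption chosen for illustration (block-diagonal normal forms are one common low-pole-sensitivity realization); the thesis's exact structure is not specified here.

```python
import numpy as np

# Sketch only, not the thesis's exact realization. We assume A is
# block-diagonal with 2x2 blocks to show where the 4n+1 vs (n+1)^2
# per-sample multiplication counts come from.

def mults_dense(n: int) -> int:
    # A x costs n*n, b*u costs n, c@x costs n, d*u costs 1 -> (n+1)^2 total.
    return n * n + 2 * n + 1

def mults_block_diagonal(n: int) -> int:
    # n/2 blocks of size 2x2 cost 4 multiplies each (2n total for A x),
    # so the full step costs 2n + n + n + 1 = 4n + 1.
    assert n % 2 == 0, "2x2 blocks require an even state order n"
    return 4 * n + 1

def step(blocks, b, c, d, x, u):
    """One sample of x[k+1] = A x[k] + b u[k], y[k] = c x[k] + d u[k],
    with A stored as a list of 2x2 blocks instead of a dense matrix."""
    x_next = np.concatenate([B @ x[2 * i:2 * i + 2] for i, B in enumerate(blocks)])
    return x_next + b * u, float(c @ x + d * u)

for n in (2, 4, 8, 16):
    print(f"n={n:2d}: dense={mults_dense(n):3d}  structured={mults_block_diagonal(n):3d}")
```

At n = 8, for instance, the count drops from 81 to 33 multiplications per sample; the gap widens quadratically with the state order.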
Main Authors: | CHENG, GUAN-YING 程冠穎 |
---|---|
Other Authors: | Ko, Hsien-Ju 柯賢儒 |
Format: | Others |
Language: | zh-TW |
Published: | 2018 |
Online Access: | http://ndltd.ncl.edu.tw/handle/e55464 |
id | ndltd-TW-106THMU1652003 |
---|---|
record_format | oai_dc |
spelling | ndltd-TW-106THMU1652003 2019-05-16T00:30:15Z http://ndltd.ncl.edu.tw/handle/e55464 The Implementations for High Computation-Efficient Recurrent Neural Networks 具高計算效率之遞迴神經網路系統實現 CHENG, GUAN-YING 程冠穎 Master's === Asia University === In-service Master Program, Department of Photonics and Communication Engineering === Academic Year 106 === In this thesis, a recurrent neural network (RNN) with high computational efficiency, based on state-space realizations, is proposed. The proposed RNN has a local-feedback structure, and we consider its implementation on fixed-point digital devices. The computationally efficient state-space realization is synthesized by minimizing a pole sensitivity measure. In contrast to the conventional optimal structures, which require (n+1)² multiplications at every sample time, the proposed structure requires only 4n+1, where n is the order of the state-space system. Trained with the backpropagation learning algorithm, the proposed structure achieves performance similar to that of the conventional optimal structures but with a significantly lower computational burden. Finally, numerical examples illustrate the effectiveness of the proposed approach. Ko, Hsien-Ju 柯賢儒 2018 Degree thesis ; thesis 37 zh-TW |
collection | NDLTD |
language | zh-TW |
format | Others |
sources | NDLTD |
description | Master's === Asia University === In-service Master Program, Department of Photonics and Communication Engineering === Academic Year 106 === In this thesis, a recurrent neural network (RNN) with high computational efficiency, based on state-space realizations, is proposed. The proposed RNN has a local-feedback structure, and we consider its implementation on fixed-point digital devices. The computationally efficient state-space realization is synthesized by minimizing a pole sensitivity measure. In contrast to the conventional optimal structures, which require (n+1)² multiplications at every sample time, the proposed structure requires only 4n+1, where n is the order of the state-space system. Trained with the backpropagation learning algorithm, the proposed structure achieves performance similar to that of the conventional optimal structures but with a significantly lower computational burden. Finally, numerical examples illustrate the effectiveness of the proposed approach. |
author2 | Ko, Hsien-Ju |
author_facet | Ko, Hsien-Ju CHENG, GUAN-YING 程冠穎 |
author | CHENG, GUAN-YING 程冠穎 |
spellingShingle | CHENG, GUAN-YING 程冠穎 The Implementations for High Computation-Efficient Recurrent Neural Networks |
author_sort | CHENG, GUAN-YING |
title | The Implementations for High Computation-Efficient Recurrent Neural Networks |
title_short | The Implementations for High Computation-Efficient Recurrent Neural Networks |
title_full | The Implementations for High Computation-Efficient Recurrent Neural Networks |
title_fullStr | The Implementations for High Computation-Efficient Recurrent Neural Networks |
title_full_unstemmed | The Implementations for High Computation-Efficient Recurrent Neural Networks |
title_sort | implementations for high computation-efficient recurrent neural networks |
publishDate | 2018 |
url | http://ndltd.ncl.edu.tw/handle/e55464 |
work_keys_str_mv | AT chengguanying theimplementationsforhighcomputationefficientrecurrentneuralnetworks AT chéngguānyǐng theimplementationsforhighcomputationefficientrecurrentneuralnetworks AT chengguanying jùgāojìsuànxiàolǜzhīdìhuíshénjīngwǎnglùxìtǒngshíxiàn AT chéngguānyǐng jùgāojìsuànxiàolǜzhīdìhuíshénjīngwǎnglùxìtǒngshíxiàn AT chengguanying implementationsforhighcomputationefficientrecurrentneuralnetworks AT chéngguānyǐng implementationsforhighcomputationefficientrecurrentneuralnetworks |
_version_ | 1719167491964403712 |