A bit-level systolic array for matrix multiplication and its application to autoassociative memory
Master's === National Cheng Kung University === Institute of Electrical Engineering === 83 === In this thesis, a bit-level systolic array based on a two-level pipelining method is proposed to implement a fast matrix-multiplication algorithm. After studying various current algorithms, in order to improve...
Main Authors: | Gui-Bin Hsieh, 謝貴彬 |
---|---|
Other Authors: | Chi-Wu Mao, Chin-Hsing Chen |
Format: | Others |
Language: | en_US |
Published: | 1995 |
Online Access: | http://ndltd.ncl.edu.tw/handle/06369411727618208389 |
id |
ndltd-TW-083NCKU0442100 |
record_format |
oai_dc |
spelling |
ndltd-TW-083NCKU04421002015-10-13T12:53:36Z http://ndltd.ncl.edu.tw/handle/06369411727618208389 A bit-level systolic array for matrix multiplication and its application to autoassociative memory 一個位元級心脈式矩陣相乘陣列及其在自動關聯記憶體上的應用 Gui-Bin Hsieh 謝貴彬 Master's National Cheng Kung University Institute of Electrical Engineering 83 In this thesis, a bit-level systolic array based on a two-level pipelining method is proposed to implement a fast matrix-multiplication algorithm. After studying various current algorithms, and in order to improve the efficiency and computation speed of every processor, we first apply word-level pipelining using the torus array of the standard multiplication algorithm, and then apply bit-level pipelining inside each processor. Because the number of processors depends on the dimension of the matrix and the number of bits per entry, we design two kinds of architectures for different conditions to reduce the hardware complexity: one is suited to the "larger dimension" condition, the other to the "larger number of bits" condition. Finally, the above methods are applied to the field of neural networks, and a bit-level systolic array for autoassociative memory is designed. Chi-Wu Mao, Chin-Hsing Chen 毛齊武, 陳進興 1995 學位論文 ; thesis 100 en_US |
collection |
NDLTD |
language |
en_US |
format |
Others |
sources |
NDLTD |
description |
Master's === National Cheng Kung University === Institute of Electrical Engineering === 83 === In
this thesis, a bit-level systolic array based on a two-level pipelining
method is proposed to implement a fast matrix-multiplication
algorithm. After studying various current algorithms, and in
order to improve the efficiency and computation speed of every
processor, we first apply word-level pipelining using the torus
array of the standard multiplication algorithm, and then apply
bit-level pipelining inside each processor. Because the number
of processors depends on the dimension of the matrix and the
number of bits per entry, we design two kinds of architectures
for different conditions to reduce the hardware complexity: one
is suited to the "larger dimension" condition, the other to the
"larger number of bits" condition. Finally, the above methods
are applied to the field of neural networks, and a bit-level
systolic array for autoassociative memory is designed.
|
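The word-level torus scheme the abstract describes resembles Cannon's algorithm: operands of A circulate along rows and operands of B along columns of a torus of processing elements, each of which multiply-accumulates one entry of the product per systolic step. The sketch below is a hypothetical sequential simulation of that dataflow, not the thesis's actual processor design (which further pipelines each multiply at the bit level); the function name `cannon_matmul` is my own.

```python
# Hypothetical sketch: word-level torus matrix multiplication in the
# style of Cannon's algorithm, simulated sequentially. Cell (i, j) of
# the n-by-n torus accumulates C[i][j] over n systolic steps.

def cannon_matmul(A, B):
    n = len(A)
    # Initial alignment: skew row i of A left by i and column j of B
    # up by j, so cell (i, j) starts with A[i][k] and B[k][j] aligned
    # at k = (i + j) mod n.
    a = [[A[i][(i + j) % n] for j in range(n)] for i in range(n)]
    b = [[B[(i + j) % n][j] for j in range(n)] for i in range(n)]
    C = [[0] * n for _ in range(n)]
    for _ in range(n):
        # Each cell multiplies its resident operands and accumulates.
        for i in range(n):
            for j in range(n):
                C[i][j] += a[i][j] * b[i][j]
        # Systolic shift: A moves one cell left along each row,
        # B moves one cell up along each column (torus wrap-around).
        a = [row[1:] + row[:1] for row in a]
        b = b[1:] + b[:1]
    return C

print(cannon_matmul([[1, 2], [3, 4]], [[5, 6], [7, 8]]))
# → [[19, 22], [43, 50]]
```

In a bit-level refinement of this scheme, each cell's multiply-accumulate would itself be decomposed into a pipeline of one-bit stages, which is the second pipelining level the abstract refers to.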
author2 |
Chi-Wu Mao, Chin-Hsing Chen |
author_facet |
Chi-Wu Mao, Chin-Hsing Chen Gui-Bin Hsieh 謝貴彬 |
author |
Gui-Bin Hsieh 謝貴彬 |
spellingShingle |
Gui-Bin Hsieh 謝貴彬 A bit-level systolic array for matrix multiplication and its application to autoassociative memory |
author_sort |
Gui-Bin Hsieh |
title |
A bit-level systolic array for matrix multiplication and its application to autoassociative memory |
title_short |
A bit-level systolic array for matrix multiplication and its application to autoassociative memory |
title_full |
A bit-level systolic array for matrix multiplication and its application to autoassociative memory |
title_fullStr |
A bit-level systolic array for matrix multiplication and its application to autoassociative memory |
title_full_unstemmed |
A bit-level systolic array for matrix multiplication and its application to autoassociative memory |
title_sort |
bit-level systolic array for matrix multiplication and its application to autoassociative memory |
publishDate |
1995 |
url |
http://ndltd.ncl.edu.tw/handle/06369411727618208389 |
work_keys_str_mv |
AT guibinhsieh abitlevelsystolicarrayformatrixmultiplicationanditsapplicationtoautoassociativememory AT xièguìbīn abitlevelsystolicarrayformatrixmultiplicationanditsapplicationtoautoassociativememory AT guibinhsieh yīgèwèiyuánjíxīnmàishìjǔzhènxiāngchéngzhènlièjíqízàizìdòngguānliánjìyìtǐshàngdeyīngyòng AT xièguìbīn yīgèwèiyuánjíxīnmàishìjǔzhènxiāngchéngzhènlièjíqízàizìdòngguānliánjìyìtǐshàngdeyīngyòng AT guibinhsieh bitlevelsystolicarrayformatrixmultiplicationanditsapplicationtoautoassociativememory AT xièguìbīn bitlevelsystolicarrayformatrixmultiplicationanditsapplicationtoautoassociativememory |
_version_ |
1716868357609750528 |