Analysis and Research of Using Non-linear Quantization Strategy in Hebbian-type Associative Memories (赫比式關聯記憶體非線性量化之分析與研究)
Main Authors: TAI-FENG CHU, 朱泰峰
Other Authors: Ching-Tsorng Tsai, 蔡清欉
Format: Others
Language: zh-TW
Published: 2006
Online Access: http://ndltd.ncl.edu.tw/handle/52333178207743016024
id: ndltd-TW-094THU00392005
record_format: oai_dc
collection: NDLTD
language: zh-TW
format: Others
sources: NDLTD

description:
Master's === Tunghai University === In-service Master's Program, Department of Computer Science and Information Engineering === 94 === To make Hebbian-type associative memories widely applicable, the most common approach at present is to implement them in VLSI. However, as the number of stored patterns increases, the number of interconnections among the neurons of a traditional Hebbian-type associative memory grows rapidly as well, and this becomes a bottleneck in actual VLSI fabrication. There are two directions for solving the problem: one is to develop higher-order Hebbian-type associative memories, and the other is to reduce the number of interconnections inside the memory. Although higher-order Hebbian-type associative memories can store more patterns, their internal connections inevitably increase even more rapidly. Reducing the number of interconnections is therefore the fundamental way to settle this problem once and for all.
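To make the scaling argument concrete, the following is a minimal sketch in standard Hopfield-style notation; the outer-product rule and the order-of-growth estimates are the usual textbook formulation and are assumed here, since the abstract does not state the exact model. For N bipolar neurons storing P patterns x^{(p)} in {-1, +1}^N, the first-order weights are

T_{ij} = \sum_{p=1}^{P} x_i^{(p)} x_j^{(p)}, \qquad i \neq j,

which requires on the order of N^2 interconnections, while a quadratic (second-order) memory uses

T_{ijk} = \sum_{p=1}^{P} x_i^{(p)} x_j^{(p)} x_k^{(p)},

so its interconnection count grows on the order of N^3. This is why raising the order increases storage capacity but aggravates, rather than relieves, the VLSI wiring bottleneck.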
Using a quantization strategy to reduce the number of interconnections is an efficient approach. Chung and Tsai analyzed the quantization of the connection values of Hebbian-type associative memories and found that the memories still perform quite well after quantization. The strategies they applied were two-level, three-level, and linear quantization. One important characteristic of Hebbian-type associative memories is that their interconnection values follow an approximately Gaussian distribution. Exploiting this property with a non-linear quantization strategy can further improve the performance of Hebbian-type associative memories.
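As an illustration only: the abstract does not give Chung and Tsai's exact quantization rules, so the following minimal Python sketch shows one plausible three-level quantizer and one uniform (linear) quantizer for a Hebbian weight matrix. The helper names and the threshold theta are hypothetical choices, not the thesis's definitions.

import numpy as np

def hebbian_weights(patterns):
    # Outer-product (Hebbian) weight matrix for bipolar patterns; zero diagonal.
    X = np.asarray(patterns, dtype=float)   # shape (P, N), entries +1 / -1
    T = X.T @ X                             # T[i, j] = sum_p x_i^(p) * x_j^(p)
    np.fill_diagonal(T, 0.0)
    return T

def quantize_three_level(T, theta):
    # Three-level quantization: weights above +theta -> +1, below -theta -> -1, else 0.
    return np.where(T > theta, 1.0, np.where(T < -theta, -1.0, 0.0))

def quantize_linear(T, step):
    # Uniform (linear) quantization: round each weight to the nearest multiple of `step`.
    return step * np.round(T / step)

For instance, quantize_three_level(hebbian_weights(X), theta=0.5) maps every interconnection value to one of {-1, 0, +1}; the choice of theta here is purely illustrative.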
In this research, we introduce first-order and quadratic Hebbian-type associative memories and derive the equation for the probability of direct convergence under a linear quantization strategy. The key point of this research is the non-linear quantization strategy: the Gaussian probability density function is integrated and divided into regions of equal area according to the required number of levels; the length that each region occupies on the x-axis is found; and the upper and lower limit values are calculated from the ratio of each region's length to the whole. These values are then substituted back into the original linear-quantization equation for the probability of direct convergence, yielding the corresponding equation for Hebbian-type associative memories after non-linear quantization. We can then determine which strategy is superior by comparing the linear and non-linear probability-of-direct-convergence equations.
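The partitioning step described above can be illustrated with the following sketch. It assumes the interconnection values follow N(mu, sigma^2), splits the density into regions of equal probability mass using the inverse CDF for the cut points, and uses the truncated-normal mean as each region's representative level; this is one reading of the procedure, not necessarily the thesis's exact derivation of the upper and lower limit values.

import numpy as np
from scipy.stats import norm

def equal_area_levels(n_levels, mu=0.0, sigma=1.0):
    # Cut the Gaussian N(mu, sigma^2) into n_levels regions of equal probability
    # mass.  The cut points come from the inverse CDF; each region's representative
    # level is taken as its conditional mean (truncated-normal centroid), which is
    # one plausible choice of quantization level, not the thesis's own definition.
    probs = np.linspace(0.0, 1.0, n_levels + 1)
    cuts = norm.ppf(probs, loc=mu, scale=sigma)        # includes -inf and +inf
    a, b = cuts[:-1], cuts[1:]
    za, zb = (a - mu) / sigma, (b - mu) / sigma
    # E[X | a < X < b] = mu + sigma * (pdf(za) - pdf(zb)) / (cdf(zb) - cdf(za))
    levels = mu + sigma * (norm.pdf(za) - norm.pdf(zb)) / (norm.cdf(zb) - norm.cdf(za))
    return cuts, levels

def quantize_nonlinear(T, cuts, levels):
    # Map each interconnection value to the representative level of its region.
    idx = np.clip(np.searchsorted(cuts, T, side="right") - 1, 0, len(levels) - 1)
    return levels[idx]

For example, with n_levels = 5 and mu, sigma estimated from the trained weight matrix T, quantize_nonlinear(T, *equal_area_levels(5, mu, sigma)) replaces every interconnection by one of five values concentrated where the Gaussian mass is dense, whereas linear quantization spaces its levels evenly across the value range.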
Comparing the experimental results of the linear and non-linear probability-of-direct-convergence equations, we can clearly observe that the convergence probability under the non-linear quantization strategy is far superior to that under the linear strategy. Therefore, when fabricating chips, the non-linear quantization strategy has greater practical merit for Hebbian-type associative memories.