Precompensated Analog Berger Codes for Neuromorphic RRAM in Neural Network

Bibliographic Details
Main Author: Chang, Yu-Teng (張宇騰)
Other Authors: Tsung-Chu Huang
Format: Others
Language: zh-TW
Published: 2019
Online Access: http://ndltd.ncl.edu.tw/handle/2vg947
Description
Summary: Master's thesis === National Changhua University of Education === Department of Electronic Engineering === 108 (2019) ===

In today's era of rapid artificial-intelligence development, neural networks play a key role, but they demand a very large amount of computation. Neuromorphic resistive random-access memory (RRAM) has therefore been proposed to accelerate neural-network operation by carrying out the multiply-and-accumulate work in memory rather than on a graphics processing unit (GPU). RRAM, however, is subject to errors introduced during read, write, and transfer, as well as to process defects.

This thesis therefore proposes error detection and precompensation for neuromorphic RRAM based on an analog form of the Berger code, the neural network itself, and an error-rebound algorithm, so that detected errors are corrected and the corrected values are written back to the faulty RRAM cells. Analog codes are known from the literature to be fault-tolerant. Here the Berger code, normally used for error detection in asymmetric-error systems, is recast as an analog code, and the analog Berger code is further extended into a two-dimensional architecture that can locate a single error.

Errors detected by the analog Berger code are first handled by the proposed error-rebound algorithm: the checksum read from the code is compared with the sum of the erroneous codeword, and the average of the resulting discrepancy is subtracted back into each cell so that the error "rebounds." The neural network is then exploited: during forward propagation, the products of inputs and weights are accumulated as a read checksum used for an average-gradient calculation, and during backward propagation the weights are initialized or updated accordingly, giving the network a self-healing ability. After learning and training, the corrected codewords are rewritten to the RRAM. The proposed flow reduces neural-network training time and improves learning accuracy.

The error-rate analysis is carried out through block-error-rate simulation. All errors are divided into local errors and regional errors: local errors are injected as assumed cell faults, while regional errors model the noise the analog signal suffers in the channel. Local and regional errors are simulated individually and in combination. According to the block-error-rate results, in an AWGN environment with mean = 0 the MTTF is about 100 times larger than in the uncoded case, and with an asymmetric AWGN of mean = 0.125 the improvement is more than about 150 times; when local errors occur, the MTTF is still improved by a factor of 10. These results show that the proposed method effectively compensates for the large error rates caused by degradation and that self-repair is also achieved through the self-healing ability of the neural network itself.
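The error-rebound step can be pictured with a short sketch. The fragment below is only a minimal illustration under assumptions made for this page, not the thesis implementation: it takes the sum of the analog cell values as the stored check word and spreads any read discrepancy evenly back over the cells; the names analog_berger_checksum and error_rebound are invented here.

import numpy as np

def analog_berger_checksum(word):
    # Analog counterpart of a Berger check: store the sum of the cell values,
    # so unidirectional (asymmetric) drift shows up as a checksum gap.
    return float(np.sum(word))

def error_rebound(read_word, stored_checksum):
    # Subtract the average discrepancy from every cell ("error rebound").
    discrepancy = np.sum(read_word) - stored_checksum
    return read_word - discrepancy / read_word.size

# Toy example: one RRAM row of conductance-like weights.
weights = np.array([0.20, 0.45, 0.10, 0.25])
checksum = analog_berger_checksum(weights)   # written alongside the row
noisy = weights + 0.05                       # uniform unidirectional drift
print(error_rebound(noisy, checksum))        # recovers the original row

For a uniform drift the rebound in this sketch is exact; an isolated cell error is only diluted across the row, and the abstract indicates that the remaining deviation is then absorbed by the self-healing of the neural network during retraining.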
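The two-dimensional extension is described as able to locate a single error. One plausible reading, sketched below under the same caveat (an assumption-based illustration, not the thesis design), keeps a checksum per row and per column of a weight tile; the faulty cell then lies at the intersection of the row and the column whose checksums disagree.

import numpy as np

def locate_single_error(tile, row_sums, col_sums, tol=1e-6):
    # The row and column whose analog checksums deviate intersect at the error.
    row_err = np.abs(tile.sum(axis=1) - row_sums)
    col_err = np.abs(tile.sum(axis=0) - col_sums)
    r, c = int(np.argmax(row_err)), int(np.argmax(col_err))
    if row_err[r] < tol or col_err[c] < tol:
        return None                       # nothing locatable as a single error
    return r, c, float(row_err[r])        # position and size of the deviation

# Toy 3x3 weight tile with precomputed row/column checksums.
tile = np.array([[0.1, 0.2, 0.3],
                 [0.4, 0.5, 0.6],
                 [0.7, 0.8, 0.9]])
row_sums, col_sums = tile.sum(axis=1), tile.sum(axis=0)

faulty = tile.copy()
faulty[1, 2] += 0.25                      # inject one local cell error
print(locate_single_error(faulty, row_sums, col_sums))   # -> (1, 2, 0.25)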