Summary: | Master's === National Chiao Tung University === Institute of Electronics === 105 === Brain-inspired (neuromorphic) computing is a promising direction for future computing paradigms. It can not only overcome the bottleneck of the conventional von Neumann architecture but also offers many attractive characteristics, such as low power consumption, excellent fault tolerance, massive parallelism, and combined storage and computation. There have been many standout demonstrations of software-based neuromorphic computing, such as IBM Watson, Facebook DeepFace, and Google DeepMind AlphaGo, as well as successful research programs in hardware-based neuromorphic computing, such as IBM TrueNorth, Stanford Neurogrid, UHEI BrainScaleS, and UoM SpiNNaker. However, these purely CMOS-based approaches may not reach the computational capability and density of the biological neural networks in the human brain. It has therefore been proposed to use RRAM (resistive random-access memory) as the synaptic device. RRAM is a two-terminal device with a simple metal-insulator-metal (MIM) structure. It has promising scaling potential for high-density applications, fast operation speed, and low operation power. Furthermore, in neuromorphic applications, its adjustable resistance can be employed to mimic biological synaptic weight change (so-called synaptic plasticity), and its crossbar array structure is well suited for parallel computation, which can dramatically accelerate neuromorphic computing. Consequently, using RRAM as the synaptic device is a promising path toward realizing neuromorphic hardware systems.
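The parallel computation of a crossbar array follows from Kirchhoff's current law: applying input voltages to the word lines and summing cell currents on each bit line performs a vector-matrix product in a single step. A minimal numerical sketch of this idea, with purely illustrative conductance and voltage values (not measured device data):

```python
import numpy as np

# Illustrative conductance matrix: each RRAM cell's conductance G[i][j]
# stores one synaptic weight (values and shape are assumptions).
G = np.array([[1.0e-6, 5.0e-6],
              [2.0e-6, 1.0e-6],
              [4.0e-6, 3.0e-6]])  # siemens: 3 word lines x 2 bit lines

V = np.array([0.2, 0.0, 0.2])  # read voltages on the word lines (volts)

# Each bit line sums I_j = sum_i V_i * G_ij, so the whole array computes
# a vector-matrix product in one parallel read operation.
I = V @ G  # output currents on the bit lines (amperes)
print(I)
```

The key point is that the multiply-accumulate happens in the analog domain, inside the memory array itself, rather than in a separate processing unit.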
In this thesis, we implement RRAM synaptic devices in hardware neural networks (HNNs). We investigate both the device characteristics and a winner-take-all hardware neural system, design the testing flow of the HNN, and then demonstrate a binary pattern recognition function. This thesis consists of five chapters, and the main research content is described in Chapters 2 to 4.
In Chapter 2, we construct the HNN testing platform. The basic functional units of the HNN are (1) the RRAM synapse unit, (2) the CMOS neuron unit, and (3) the FPGA control interface. Building on an understanding of the functions of these units, we establish a testing and analysis platform that acquires output signals from an oscilloscope and analyzes them in MATLAB.
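The abstract does not reproduce the analysis scripts themselves; as a minimal sketch of the kind of post-processing such a platform performs, sampled oscilloscope voltages can be thresholded to recover each neuron's digital output. All values here (the samples and the logic threshold) are illustrative assumptions:

```python
# Hypothetical post-processing: voltage samples captured from the
# oscilloscope are compared against an assumed logic threshold to
# recover the neuron circuit's digital output bits.
samples = [0.05, 0.02, 1.15, 1.20, 0.03, 1.18]  # illustrative volts
V_TH = 0.6  # assumed logic threshold (volts)

bits = [1 if v > V_TH else 0 for v in samples]
print(bits)
```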
In Chapter 3, we introduce the winner-take-all hardware neural network. When constructing the RRAM-based HNN, practical device properties, such as retention, variation, and window size, must be carefully considered. Based on the hardware system and these practical device properties, we modify the proposed algorithm and write hardware description language (Verilog) code to improve the testing flow, using the concepts of a finite state machine (FSM) and parameter propagation.
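In a winner-take-all layer, each output neuron integrates its weighted input current and only the neuron with the largest sum fires, suppressing the others; the winner's synapses are then programmed toward the input pattern. A small sketch of this competition and update step, with assumed binary weights (1 standing for the low-resistance state, 0 for the high-resistance state; the shapes and the learning step are illustrative, not the thesis's exact algorithm):

```python
import numpy as np

# Assumed binary weight matrix: 9 inputs (a 3x3 pixel pattern) x 2
# output neurons; 1 = low-resistance state, 0 = high-resistance state.
W = np.array([[1, 1],
              [1, 0],
              [0, 1],
              [0, 0],
              [0, 1],
              [0, 0],
              [0, 0],
              [0, 0],
              [0, 1]], dtype=float)

x = np.array([1, 0, 1, 0, 1, 0, 1, 0, 1], dtype=float)  # input pattern

# Each output neuron sums its weighted input; only the neuron with the
# largest sum fires (the winner), inhibiting the other neurons.
k = int(np.argmax(x @ W))

# Illustrative learning step: program the winner's column toward the
# input pattern, mimicking set/reset of the individual RRAM synapses.
W[:, k] = x
```

After repeated presentations, each output neuron in such a scheme comes to represent one stored pattern, which is what enables the recognition demonstration that follows.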
In Chapter 4, a binary pattern recognition function is demonstrated. We also investigate how the device properties affect the HNN system operation and the final recognition results by analyzing the discrepancies between the experimental results and analytical derivations.
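One way device properties enter the recognition result is through the read margin: cycle-to-cycle conductance variation narrows the effective resistance window, and if it grows too large the winner can flip. A toy sketch of this effect, where the LRS/HRS conductances and the variation level are illustrative assumptions rather than the thesis's measured values:

```python
import numpy as np

rng = np.random.default_rng(42)

# Assumed device parameters: G_LRS/G_HRS define the resistance window;
# sigma models relative cycle-to-cycle conductance variation.
G_LRS, G_HRS, SIGMA = 1.0e-4, 1.0e-6, 0.1

def read_current(bits, v_read=0.2):
    """Sum the bit-line current for a column of cells storing `bits`,
    with multiplicative Gaussian variation on each cell's conductance."""
    g = np.where(np.array(bits) == 1, G_LRS, G_HRS)
    g = g * (1 + SIGMA * rng.standard_normal(g.shape))  # device variation
    return float(np.sum(v_read * g))

# A column with more matching LRS cells should still yield the larger
# current despite variation, as long as the window stays wide enough.
i_match = read_current([1, 1, 1, 0])
i_other = read_current([1, 0, 0, 0])
```

Sweeping `SIGMA` upward in such a model shows when the two currents begin to overlap, which is the qualitative mechanism behind variation-induced recognition errors.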