Summary: | Master's thesis === National Chiao Tung University === Institute of Electronics === ROC year 106 (2017) === Riding the waves of technological innovation, the development of contemporary computers shows no sign of slowing down. However, the von Neumann architecture, in which the memory and the arithmetic processor are separated, is inherently inefficient for computing on massive data and consumes considerable power. In contrast, the human brain consumes more than three orders of magnitude less power (~0.01 W/cm2) than contemporary computers (>10 W/cm2) because its computation is highly parallel and distributed.
Therefore, inspired by the human brain, many neural networks have been developed to mimic sensory functions such as image, text, and voice recognition. Meanwhile, to realize hardware-based neural networks (HNNs), resistive random-access memory (RRAM)-based synaptic devices have been widely investigated, in which the device conductance represents the connection strength between neurons, the so-called synaptic weight. Moreover, many synaptic plasticity functions, including long-term potentiation (LTP), long-term depression (LTD), and spike-timing-dependent plasticity (STDP), have been demonstrated in these resistive synaptic devices.
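To make the plasticity functions above concrete, the following is a minimal illustrative sketch (not taken from the thesis) of the widely used exponential STDP update rule, where the sign of the weight change depends on whether the presynaptic spike precedes the postsynaptic one. The amplitudes and time constant are assumed placeholder values.

```python
import math

# Assumed STDP parameters (placeholders, not measured device values):
A_PLUS, A_MINUS = 0.1, 0.12   # potentiation / depression amplitudes
TAU = 20.0                    # decay time constant in milliseconds

def stdp_dw(dt_ms):
    """Weight change for a spike pair with dt_ms = t_post - t_pre.

    Pre-before-post (dt > 0) strengthens the synapse (LTP);
    post-before-pre (dt < 0) weakens it (LTD).
    """
    if dt_ms > 0:
        return A_PLUS * math.exp(-dt_ms / TAU)
    elif dt_ms < 0:
        return -A_MINUS * math.exp(dt_ms / TAU)
    return 0.0

print(stdp_dw(10.0))   # positive change: LTP
print(stdp_dw(-10.0))  # negative change: LTD
```

In an RRAM synapse, this weight change would correspond to a small increase or decrease of the device conductance triggered by overlapping pre- and post-neuron voltage pulses.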
In this thesis, a printed-circuit-board-based hardware neural network platform was successfully established. The platform can initialize a large number of resistive synaptic devices automatically, greatly reducing the time required for device screening. Switch matrices were used to reduce the number of FPGA input signals, allowing recognition of higher-resolution patterns, from 9 up to 100 pixels. This fully parallel HNN emulator performed recognition tasks on three sets of training patterns using a back-propagation learning rule: letters (L, T, V), numbers (4, 6, 9), and Space Invaders figures. Even when the testing patterns contained one or two error bits, the emulator achieved perfect accuracy, demonstrating excellent tolerance to input noise.
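As a software illustration of the kind of task described above (not the thesis hardware itself), the sketch below trains a single-layer network with the delta rule, the single-layer case of back-propagation, on assumed 3x3 bitmaps for the letters L, T, and V, then checks its tolerance to one flipped bit. The pixel patterns and hyperparameters are assumptions for demonstration only; in the hardware platform, RRAM conductances would play the role of the software weights.

```python
import math

# Assumed 3x3 bitmaps for L, T, V (illustrative, not from the thesis).
PATTERNS = {
    "L": [1,0,0, 1,0,0, 1,1,1],
    "T": [1,1,1, 0,1,0, 0,1,0],
    "V": [1,0,1, 1,0,1, 0,1,0],
}
LABELS = list(PATTERNS)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# One weight vector and bias per output class, initialized to zero.
weights = [[0.0] * 9 for _ in LABELS]
biases = [0.0] * len(LABELS)

for _ in range(2000):                       # epochs (assumed hyperparameter)
    for k, label in enumerate(LABELS):
        x = PATTERNS[label]
        for j in range(len(LABELS)):
            target = 1.0 if j == k else 0.0
            y = sigmoid(sum(w * xi for w, xi in zip(weights[j], x)) + biases[j])
            delta = (target - y) * y * (1.0 - y)    # sigmoid output error term
            for i in range(9):
                weights[j][i] += 0.5 * delta * x[i] # learning rate 0.5 (assumed)
            biases[j] += 0.5 * delta

def classify(x):
    """Return the label whose output unit responds most strongly."""
    scores = [sum(w * xi for w, xi in zip(weights[j], x)) + biases[j]
              for j in range(len(LABELS))]
    return LABELS[scores.index(max(scores))]

# Noise-tolerance check: flip each single pixel of each pattern and re-classify.
correct = total = 0
for label, x in PATTERNS.items():
    for i in range(9):
        noisy = x[:]
        noisy[i] ^= 1
        correct += (classify(noisy) == label)
        total += 1
print(f"1-bit-noise accuracy: {correct}/{total}")
```

This mirrors the structure of the reported experiment, training on clean patterns and testing on patterns with error bits, though the actual emulator maps the trained weights onto resistive synaptic devices rather than floating-point variables.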
|