Development and VLSI implementation of a new neural net generation method
The author begins with a short introduction to current neural network practices and pitfalls, including an in-depth discussion of the meaning behind the equations. Specifically, a description of the underlying processes involved is given which likens training to the biological process of cell differentiation. Building on these ideas, an improved method of generating integer-based binary neural networks is developed. This type of network is particularly useful for the optical character recognition problem, but methods for usage in the more general case are discussed. The new method does not use training as such. Rather, the training data is analyzed to determine the statistically significant relationships therein. These relationships are used to generate a neural network structure that is an idealization of the trained version in that it can accurately extrapolate from existing knowledge by exploiting known relationships in the training data. The thesis then turns to the design and testing of a VLSI CMOS chip which was created to utilize the new technique. The chip is based on the MOSIS 2µm process, using a 2200λ × 2200λ die that was shaped into a special-purpose microprocessor that could be used in any of a number of pattern recognition applications with low power requirements and/or other limiting considerations. Simulation results of the methods are then given in which it is shown that error rates of less than 5% for inputs containing up to 30% noise can easily be achieved. Finally, the thesis concludes with ideas on how the various methods described might be improved further.
Main Author: | Bittner, Ray Albert |
Other Authors: | Electrical Engineering |
Format: | Others |
Language: | en |
Published: | Virginia Tech, 2014 |
Subjects: | Integrated circuits -- Very large scale integration; Neural computers -- Circuits; Neural networks (Computer science) |
Online Access: | http://hdl.handle.net/10919/46092 http://scholar.lib.vt.edu/theses/available/etd-12042009-020129/ |
id |
ndltd-VTETD-oai-vtechworks.lib.vt.edu-10919-46092 |
record_format |
oai_dc |
spelling |
ndltd-VTETD-oai-vtechworks.lib.vt.edu-10919-46092 2021-05-15T05:26:46Z Development and VLSI implementation of a new neural net generation method Bittner, Ray Albert Electrical Engineering Conners, Richard W. Athanas, Peter M. Abbott, A. Lynn LD5655.V855 1993.B588 Integrated circuits -- Very large scale integration Neural computers -- Circuits Neural networks (Computer science) The author begins with a short introduction to current neural network practices and pitfalls including an in depth discussion of the meaning behind the equations. Specifically, a description of the underlying processes involved is given which likens training to the biological process of cell differentiation. Building on these ideas, an improved method of generating integer based binary neural networks is developed. This type of network is particularly useful for the optical character recognition problem, but methods for usage in the more general case are discussed. The new method does not use training as such. Rather, the training data is analyzed to determine the statistically significant relationships therein. These relationships are used to generate a neural network structure that is an idealization of the trained version in that it can accurately extrapolate from existing knowledge by exploiting known relationships in the training data. The paper then turns to the design and testing of a VLSI CMOS chip which was created to utilize the new technique. The chip is based on the MOSIS 2µm process using a 2200λ × 2200λ die that was shaped into a special purpose microprocessor that could be used in any of a number of pattern recognition applications with low power requirements and/or limiting considerations. Simulation results of the methods are then given in which it is shown that error rates of less than 5% for inputs containing up to 30% noise can easily be achieved. Finally, the thesis concludes with ideas on how the various methods described might be improved further.
Master of Science 2014-03-14T21:50:55Z 2014-03-14T21:50:55Z 1993-05-17 2009-12-04 2009-12-04 2009-12-04 Thesis Text etd-12042009-020129 http://hdl.handle.net/10919/46092 http://scholar.lib.vt.edu/theses/available/etd-12042009-020129/ en OCLC# 28513810 LD5655.V855_1993.B588.pdf xii, 136 leaves BTD application/pdf application/pdf Virginia Tech |
collection |
NDLTD |
language |
en |
format |
Others |
sources |
NDLTD |
topic |
LD5655.V855 1993.B588 Integrated circuits -- Very large scale integration Neural computers -- Circuits Neural networks (Computer science) |
spellingShingle |
LD5655.V855 1993.B588 Integrated circuits -- Very large scale integration Neural computers -- Circuits Neural networks (Computer science) Bittner, Ray Albert Development and VLSI implementation of a new neural net generation method |
description |
The author begins with a short introduction to current neural network practices and pitfalls, including an in-depth discussion of the meaning behind the equations. Specifically, a description of the underlying processes involved is given which likens training to the biological process of cell differentiation. Building on these ideas, an improved method of generating integer-based binary neural networks is developed. This type of network is particularly useful for the optical character recognition problem, but methods for usage in the more general case are discussed. The new method does not use training as such. Rather, the training data is analyzed to determine the statistically significant relationships therein. These relationships are used to generate a neural network structure that is an idealization of the trained version in that it can accurately extrapolate from existing knowledge by exploiting known relationships in the training data.
The thesis then turns to the design and testing of a VLSI CMOS chip which was created to utilize the new technique. The chip is based on the MOSIS 2µm process, using a 2200λ × 2200λ die that was shaped into a special-purpose microprocessor that could be used in any of a number of pattern recognition applications with low power requirements and/or other limiting considerations. Simulation results of the methods are then given in which it is shown that error rates of less than 5% for inputs containing up to 30% noise can easily be achieved. Finally, the thesis concludes with ideas on how the various methods described might be improved further. === Master of Science |
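The abstract's central idea — deriving integer weights for a binary network directly from training-set statistics rather than by iterative training — can be illustrated with a minimal sketch. This is not the thesis's actual algorithm: the function names `build_units` and `classify`, the 75% significance threshold, and the ±1 weight scheme are all assumptions made for illustration only.

```python
import numpy as np

def build_units(examples, labels, threshold=0.75):
    """For each class, derive integer weights from per-bit statistics.

    Bits that are 1 in at least `threshold` of a class's training
    examples get weight +1; bits that are 0 at least that often get
    weight -1; bits with no statistically strong tendency get weight 0.
    No iterative training is performed.
    """
    units = {}
    for cls in sorted(set(labels)):
        X = np.array([x for x, y in zip(examples, labels) if y == cls])
        p1 = X.mean(axis=0)                  # estimated P(bit == 1 | class)
        w = np.zeros(X.shape[1], dtype=int)
        w[p1 >= threshold] = +1              # reliably-on bits
        w[p1 <= 1.0 - threshold] = -1        # reliably-off bits
        units[cls] = w
    return units

def classify(units, x):
    """Score each class unit against a binary input and pick the best.

    Mapping bits {0,1} to {-1,+1} rewards agreement with both the +1
    and -1 weights, giving some tolerance to noisy inputs.
    """
    x = np.asarray(x)
    scores = {c: int(w @ (2 * x - 1)) for c, w in units.items()}
    return max(scores, key=scores.get)
```

Because the weights come from statistics over the whole training set rather than gradient steps, a bit flipped by noise only shifts the score by 2, which is one plausible reading of how such networks tolerate noisy inputs.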
author2 |
Electrical Engineering |
author_facet |
Electrical Engineering Bittner, Ray Albert |
author |
Bittner, Ray Albert |
author_sort |
Bittner, Ray Albert |
title |
Development and VLSI implementation of a new neural net generation method |
title_short |
Development and VLSI implementation of a new neural net generation method |
title_full |
Development and VLSI implementation of a new neural net generation method |
title_fullStr |
Development and VLSI implementation of a new neural net generation method |
title_full_unstemmed |
Development and VLSI implementation of a new neural net generation method |
title_sort |
development and vlsi implementation of a new neural net generation method |
publisher |
Virginia Tech |
publishDate |
2014 |
url |
http://hdl.handle.net/10919/46092 http://scholar.lib.vt.edu/theses/available/etd-12042009-020129/ |
work_keys_str_mv |
AT bittnerrayalbert developmentandvlsiimplementationofanewneuralnetgenerationmethod |
_version_ |
1719404780025020416 |