Summary: Master's thesis === National Taiwan University === Department of Computer Science and Information Engineering === 85 === Neural networks simulate the human nervous system to solve many problems in recognition, classification, and mapping. One widely used application is associative memory, which memorizes patterns through a learning process and hence possesses error tolerance and recognition ability. The fully-connected network is a common architecture for associative memory, and many models and algorithms, such as the Hopfield model and the error-correction rule, have been developed to improve its accuracy, efficiency, and capacity.
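As a rough illustration of these classical ingredients (a minimal sketch of my own, not the algorithms developed in this thesis), a Hopfield-style auto-associative memory over bipolar patterns can be stored with the Hebbian outer-product rule or trained with a perceptron-like error-correction rule; the function names below are assumptions for illustration:

    import numpy as np

    def hebbian_weights(patterns):
        """Store bipolar (+1/-1) patterns with the classical Hebbian (outer-product) rule."""
        n = patterns.shape[1]
        W = np.zeros((n, n))
        for p in patterns:
            W += np.outer(p, p)
        np.fill_diagonal(W, 0)          # no self-connections
        return W / n

    def error_correction_weights(patterns, epochs=100, lr=0.1):
        """Perceptron-style error-correction rule: adjust W until every stored
        pattern is a fixed point of the synchronous update x <- sign(Wx)."""
        n = patterns.shape[1]
        W = np.zeros((n, n))
        for _ in range(epochs):
            stable = True
            for p in patterns:
                out = np.sign(W @ p)
                err = p - out           # zero at units that are already correct
                if np.any(err != 0):
                    stable = False
                    W += lr * np.outer(err, p)
                    np.fill_diagonal(W, 0)
            if stable:
                break
        return W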
The most essential issue for an associative memory is its error tolerance; other problems, such as limit cycles and spurious stable states, are also important. In this thesis we examine the fully-connected associative memory geometrically and identify a general operating mechanism shared by conventional training algorithms. Based on this mechanism we devise several learning methods from different points of view, including a geometric method, an algebraic method, and a derivative method, to improve the performance of associative memory. By enlarging the basins of attraction we obtain better error tolerance and fewer limit cycles; these results are demonstrated by extensive simulations.
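The notions of error tolerance, limit cycles, and spurious stable states can be made concrete with a simple synchronous recall loop. The sketch below is my own illustration (not the thesis's experimental procedure): it iterates x <- sign(Wx) from a corrupted probe and reports whether the trajectory reaches a fixed point or enters a limit cycle; names such as recall and flip_bits are assumptions.

    import numpy as np

    def recall(W, probe, max_steps=100):
        """Iterate the synchronous update x <- sign(Wx); label the trajectory."""
        x = probe.copy()
        seen = [x.copy()]
        for _ in range(max_steps):
            x = np.sign(W @ x)
            x[x == 0] = 1               # break ties deterministically
            for t, prev in enumerate(seen):
                if np.array_equal(x, prev):
                    return ("fixed point" if t == len(seen) - 1 else "limit cycle"), x
            seen.append(x.copy())
        return "no convergence", x

    def flip_bits(pattern, k, rng):
        """Corrupt a stored pattern by flipping k randomly chosen bits."""
        noisy = pattern.copy()
        idx = rng.choice(len(pattern), size=k, replace=False)
        noisy[idx] *= -1
        return noisy

    # usage sketch: store 3 random patterns Hebbian-style, then recall a noisy probe
    rng = np.random.default_rng(0)
    patterns = rng.choice([-1, 1], size=(3, 32))
    W = sum(np.outer(p, p) for p in patterns) / 32.0
    np.fill_diagonal(W, 0)
    status, result = recall(W, flip_bits(patterns[0], 4, rng))
    print(status, np.array_equal(result, patterns[0]))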
We apply these methods to different types of associative memories, including auto-associative memory and temporal associative memory, and achieve excellent performance in all of them. We also develop another architecture, the reduced feedforward associative memory, which lowers the complexity while exhibiting similarly good capabilities.
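For the temporal case, a common textbook formulation (assumed here only for illustration; the thesis's methods and its reduced feedforward architecture are not reproduced) stores a cyclic sequence hetero-associatively so that each pattern drives the network toward its successor:

    import numpy as np

    def temporal_weights(sequence):
        """Hetero-associative storage of a cyclic sequence: W maps pattern t to pattern t+1."""
        n = sequence.shape[1]
        W = np.zeros((n, n))
        for t in range(len(sequence)):
            W += np.outer(sequence[(t + 1) % len(sequence)], sequence[t])
        return W / n

    def replay(W, start, steps):
        """Run the stored sequence forward from a (possibly noisy) starting pattern."""
        x = start.copy().astype(float)
        trajectory = [x.copy()]
        for _ in range(steps):
            x = np.sign(W @ x)
            x[x == 0] = 1
            trajectory.append(x.copy())
        return trajectory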