Summary: | Master's === I-Shou University === Department of Electrical Engineering === 91 === As is well known, the back-propagation (BP) learning algorithm is the most popular learning rule adopted in neural network applications. Basically, the BP learning rule is derived from an iterative gradient procedure. The proper weights of the neural network are obtained by (1) computing the error of the network output, and (2) feeding this error back, level by level, toward the inputs, adjusting each weight in proportion to the error. However, slow convergence and the tendency to become trapped in local minima are the two main problems of gradient-based learning. Speeding up learning and helping the network escape from local minima has therefore become an important task in neural network research.
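The two-step procedure described above corresponds to a classic delta-style gradient update. As a rough illustration only, here is a minimal sketch for a single linear layer with squared error; the function name `bp_update` and the `learning_rate` parameter are hypothetical, not taken from the thesis.

```python
import numpy as np

def bp_update(W, x, target, learning_rate=0.1):
    """One illustrative gradient step of the BP rule (hypothetical helper).

    W: weight matrix (outputs x inputs); x: input vector; target: desired output.
    """
    y = W @ x                              # step (1): compute the network output
    error = target - y                     # step (1): error of the network output
    # step (2): feed the error back and change each weight
    # in proportion to the error signal it receives
    W = W + learning_rate * np.outer(error, x)
    return W, error
```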
In this research, to remedy these shortcomings of the BP learning rule, a modified Gram-Schmidt (MGS) learning algorithm is investigated and developed. To simplify the structure of the neural network, a Sigma-Pi network is used in our studies. In the MGS learning algorithm, both the iterative gradient procedure and a direct matrix computation method based on the Gram-Schmidt process are adopted to find the proper network weights during training. Several experiments are carried out to demonstrate the feasibility and superiority of the proposed learning rule.
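As a rough sketch of the direct matrix computation mentioned above, the following illustrates how the output weights of a Sigma-Pi style network might be solved in one pass via a modified Gram-Schmidt QR factorization followed by a triangular solve. It assumes the Sigma-Pi term outputs have already been collected into a design matrix `Phi`; the function names and this least-squares formulation are assumptions for illustration, not the thesis's exact procedure.

```python
import numpy as np

def modified_gram_schmidt(A):
    """QR factorization of A by the modified Gram-Schmidt process."""
    m, n = A.shape
    Q = A.astype(float).copy()
    R = np.zeros((n, n))
    for k in range(n):
        R[k, k] = np.linalg.norm(Q[:, k])   # length of the k-th direction
        Q[:, k] /= R[k, k]                  # normalize it
        for j in range(k + 1, n):
            R[k, j] = Q[:, k] @ Q[:, j]     # project the remaining columns...
            Q[:, j] -= R[k, j] * Q[:, k]    # ...and remove that component
    return Q, R

def solve_output_weights(Phi, targets):
    """Direct least-squares weights (hypothetical helper, not from the thesis).

    Phi: matrix of Sigma-Pi term outputs (samples x terms);
    targets: desired network outputs, one row per sample.
    """
    Q, R = modified_gram_schmidt(Phi)
    return np.linalg.solve(R, Q.T @ targets)  # solve R w = Q^T t (R is upper triangular)
```

In a hybrid scheme of this kind, the iterative gradient procedure and the direct solve would each handle part of the weight-finding task during training, which is consistent with the combination the abstract describes.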
|