Summary: | Master's thesis === National Chiao Tung University === Department of Control Engineering === 82 === Neural networks have become a very active area of research, and much of that research focuses on their learning ability. Learning in a neural network is specified by a learning algorithm, and many such algorithms have been developed. Most are based on the gradient descent method, which exploits the derivatives of the error function; they therefore cannot always find the global optimum when the error function is multi-modal, and sometimes fall into a local minimum. The random optimization method, by contrast, does not use the derivatives of the error function, so it can, in principle, locate the global optimum. The main objective of this thesis is to apply random search techniques to various practical neural networks whose error functions are multi-modal. We improve the performance of neural networks trained with the common learning algorithm by utilizing random search techniques, and we compare the random search techniques with the conventional technique (e.g. back-propagation) for global optimization. In this thesis we investigate the optimization ability of various methods, including back-propagation and random search techniques. We first briefly review several random search techniques. Simulation results then indicate that random search techniques can be used to solve multi-modal optimization problems (e.g. function approximation and pattern classification).
|
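
The derivative-free random optimization idea described in the abstract can be sketched as follows. This is a minimal illustration only, not the thesis's actual algorithm: it assumes a Matyas-style accept-if-better rule with Gaussian perturbations, and uses a hypothetical one-dimensional multi-modal test function chosen purely for demonstration.

```python
import math
import random

def random_search(f, x0, step=0.5, iters=5000, seed=0):
    """Matyas-style random optimization: perturb the current point with
    Gaussian noise and keep the candidate only if it lowers f.
    No derivatives of f are used, so multi-modal objectives are fine."""
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    for _ in range(iters):
        cand = x + rng.gauss(0.0, step)
        fc = f(cand)
        if fc < fx:          # accept only improving moves
            x, fx = cand, fc
    return x, fx

# Hypothetical multi-modal objective (Rastrigin-like in 1-D):
# global minimum at x = 0, local minima near every other integer.
f = lambda x: x * x + 10.0 * (1.0 - math.cos(2.0 * math.pi * x))

x_best, f_best = random_search(f, x0=4.3)
```

Because the acceptance test compares only function values, the search can step out of a local basin whenever a random perturbation happens to land at a lower point, which is the property the abstract contrasts with gradient descent.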