Summary: | Master's === National Chung Cheng University === Graduate Institute of Computer Science and Information Engineering === 89 === To obtain correct results from a neural network, iterative learning is needed until every input has been correctly mapped to its desired output. This learning procedure takes a long time when the training data set is extremely large, and researchers have explored many methods, such as parallel computing mechanisms, to shorten the training time. In this thesis, we propose a distributed neural computation design to shorten the learning time. Our experimental results show that the mean square error is close to that of sequential backpropagation learning, while the learning time is shorter when the data are complex.
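The abstract does not specify how the computation is partitioned, so the following is only a minimal sketch of one common scheme, synchronous data-parallel training: each worker computes the error gradient on its own shard of the training set, the partial gradients are summed, and a single weight update is applied per epoch. Local threads stand in for the distributed nodes here, and the single sigmoid unit, the toy AND data set, and all names are illustrative assumptions, not the thesis's actual design.

```java
import java.util.*;
import java.util.concurrent.*;

public class DistributedDeltaRuleSketch {
    static double sigmoid(double x) { return 1.0 / (1.0 + Math.exp(-x)); }

    // Summed (unnormalized) error gradient over one shard [from, to) of the
    // training set; the last entry of w is the bias weight.
    static double[] shardGradient(double[][] x, double[] t, double[] w, int from, int to) {
        double[] g = new double[w.length];
        for (int i = from; i < to; i++) {
            double net = w[w.length - 1];                       // bias
            for (int j = 0; j < x[i].length; j++) net += w[j] * x[i][j];
            double y = sigmoid(net);
            double delta = (y - t[i]) * y * (1 - y);            // dE/dnet
            for (int j = 0; j < x[i].length; j++) g[j] += delta * x[i][j];
            g[w.length - 1] += delta;
        }
        return g;
    }

    public static void main(String[] args) throws Exception {
        double[][] x = { {0, 0}, {0, 1}, {1, 0}, {1, 1} };      // toy AND problem
        double[] t = { 0, 0, 0, 1 };
        double[] w = new double[3];                              // 2 weights + bias
        double eta = 0.5;
        int workers = 2;
        ExecutorService pool = Executors.newFixedThreadPool(workers);

        for (int epoch = 0; epoch < 5000; epoch++) {
            // Each worker handles one contiguous shard of the data.
            List<Future<double[]>> parts = new ArrayList<>();
            int shard = (x.length + workers - 1) / workers;
            for (int s = 0; s < x.length; s += shard) {
                final int from = s, to = Math.min(s + shard, x.length);
                final double[] wSnap = w.clone();               // weights seen by this worker
                parts.add(pool.submit(() -> shardGradient(x, t, wSnap, from, to)));
            }
            // Sum the partial gradients and apply one synchronous update.
            double[] g = new double[w.length];
            for (Future<double[]> f : parts) {
                double[] p = f.get();
                for (int j = 0; j < g.length; j++) g[j] += p[j];
            }
            for (int j = 0; j < w.length; j++) w[j] -= eta * g[j] / x.length;
        }
        pool.shutdown();
        for (int i = 0; i < x.length; i++)
            System.out.printf("in=%s target=%.0f out=%.3f%n",
                Arrays.toString(x[i]), t[i],
                sigmoid(w[2] + w[0] * x[i][0] + w[1] * x[i][1]));
    }
}
```

Because every worker starts an epoch from the same weight snapshot and the shard gradients are summed before the update, one synchronous epoch computes exactly the same update as sequential batch learning, which is consistent with the abstract's observation that the mean square error stays close to the sequential case.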
In the design, Petri net theory is used as the base modeling tool for developing the distributed system, to ensure the reliability of the parallel computation environment. In the implementation, we adopt CORBA technology as the middleware and Java as the programming language.
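To illustrate the kind of check that Petri net modeling makes possible, here is a minimal sketch of a place/transition net with token markings. The two-place dispatch/collect cycle below is a hypothetical stand-in, not the thesis's actual model; the point is only that once the system is expressed this way, properties such as freedom from deadlock (some transition is always enabled) can be verified before implementation.

```java
import java.util.Arrays;

public class PetriNetSketch {
    final int[][] pre, post;   // pre[t][p] / post[t][p]: tokens consumed / produced
    int[] marking;             // current token count per place

    PetriNetSketch(int[][] pre, int[][] post, int[] m0) {
        this.pre = pre; this.post = post; this.marking = m0.clone();
    }

    // A transition is enabled when every input place holds enough tokens.
    boolean enabled(int t) {
        for (int p = 0; p < marking.length; p++)
            if (marking[p] < pre[t][p]) return false;
        return true;
    }

    // Firing consumes input tokens and produces output tokens.
    void fire(int t) {
        for (int p = 0; p < marking.length; p++)
            marking[p] += post[t][p] - pre[t][p];
    }

    public static void main(String[] args) {
        // Places: p0 = master idle, p1 = job dispatched.
        // Transitions: t0 = dispatch (p0 -> p1), t1 = collect (p1 -> p0).
        int[][] pre  = { {1, 0}, {0, 1} };
        int[][] post = { {0, 1}, {1, 0} };
        PetriNetSketch net = new PetriNetSketch(pre, post, new int[]{1, 0});

        for (int step = 0; step < 4; step++) {
            boolean fired = false;
            for (int t = 0; t < pre.length && !fired; t++)
                if (net.enabled(t)) {
                    net.fire(t); fired = true;
                    System.out.println("fired t" + t + ", marking = "
                        + Arrays.toString(net.marking));
                }
            if (!fired) { System.out.println("deadlock"); break; }
        }
    }
}
```

In practice the reachable markings of such a net can be enumerated exhaustively, so liveness and boundedness of the master/worker protocol can be established before the CORBA implementation is built.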
|