On the Design of Distributed Neural Computation Based on Petri Net Scheduling and CORBA Technology

Bibliographic Details
Main Authors: Yuan-Hou Chang, 張原豪
Other Authors: Pao-Ta Yu
Format: Others
Language: en_US
Published: 2001
Online Access: http://ndltd.ncl.edu.tw/handle/72027812775519450497
Description
Summary: Master's thesis === National Chung Cheng University === Institute of Computer Science and Information Engineering === Academic year 89 === To obtain correct results from a neural network, iterative learning is needed until every input has been correctly mapped to its desired output. This learning procedure takes a long time when the training data set is extremely large. Researchers have explored many methods, such as adopting parallel computing mechanisms, to shorten the training time. In this thesis, we propose a design for distributed neural computation that shortens the learning time. Our experimental results show that the mean square error is close to that of the sequential backpropagation learning mode, while the learning time is shorter when the data is complex. In the design, Petri net theory is taken as the base modeling tool for developing the distributed system, to ensure the reliability of the parallel computation environment. In the implementation, we choose CORBA technology as the middleware and Java as the programming language.
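
The abstract describes distributing backpropagation learning across workers while keeping the mean square error close to the sequential mode. A minimal Java sketch of that data-parallel idea, under assumptions of mine: the training set is split into partitions (one per hypothetical worker), each partition computes a local gradient, and a coordinator averages the gradients before updating the shared weights. The class, method names, and the single linear unit are illustrative; the thesis's actual CORBA interfaces, Petri net scheduler, and network topology are not shown in this record.

```java
import java.util.Arrays;

public class DataParallelSketch {

    // Mean-squared-error gradient of a linear unit y = w . x on one partition.
    static double[] localGradient(double[][] xs, double[] ys, double[] w) {
        double[] g = new double[w.length];
        for (int i = 0; i < xs.length; i++) {
            double pred = 0.0;
            for (int j = 0; j < w.length; j++) pred += w[j] * xs[i][j];
            double err = pred - ys[i];
            for (int j = 0; j < w.length; j++) g[j] += 2.0 * err * xs[i][j] / xs.length;
        }
        return g;
    }

    // Trains on a toy data set for the target y = 2*x0 + 1*x1, split into
    // two partitions that stand in for two distributed workers.
    static double[] train() {
        double[][][] xs = { {{1, 0}, {0, 1}}, {{1, 1}, {2, 1}} };
        double[][] ys = { {2, 1}, {3, 5} };
        double[] w = {0, 0};
        double lr = 0.1;
        for (int epoch = 0; epoch < 500; epoch++) {
            double[] avg = new double[w.length];
            for (int p = 0; p < xs.length; p++) {          // each "worker"
                double[] g = localGradient(xs[p], ys[p], w);
                for (int j = 0; j < w.length; j++) avg[j] += g[j] / xs.length;
            }
            for (int j = 0; j < w.length; j++) w[j] -= lr * avg[j];
        }
        return w;
    }

    public static void main(String[] args) {
        System.out.println(Arrays.toString(train())); // converges near [2.0, 1.0]
    }
}
```

In a real deployment each partition's gradient would be computed on a remote worker behind a CORBA interface; here the "workers" are just loop iterations, which keeps the averaging logic visible without any middleware.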