Summary: | Doctoral === 國立臺灣科技大學 === 機械工程系 === 100 === This study proposes an optimization methodology, Prediction Reliability Guided Search of Evolving Network Modeling (PREGSEN), based on soft computing techniques for design optimization with a limited number of experiments. Response samples from the actual system can be used to train a simulated neural network model prior to carrying out an optimum search, thereby reducing experimental costs. However, small sample sets are the norm in engineering applications because of high experimental costs, which hampers the establishment of an accurate simulated model in a single pass. Small and poorly distributed samples lead to a lack of modeling generality for complex problems. Owing to the characteristics of neural network models, prediction accuracy is closely related to the location of the predicted design and its distance from the learning samples. The proposed scheme therefore evolves the network model along with the iterative search for the design optimum. Fuzzy reasoning is set up to estimate the prediction reliability of the network model, and this reliability is introduced into the fitness function of a genetic algorithm to guide the search toward a reliable quasi-optimum of the current simulated model. The verification of the quasi-optimum serves as an additional learning sample, appended to the previous samples to retrain the network model. The model evolution and search processes iterate until the optimum converges. The network model gradually improves its precision, especially in the most probable region of the design optimum, and thus enhances sampling efficiency. Two benchmark numerical examples illustrate the feasibility and efficiency of the proposed scheme compared with the conventional iteration of a neural network (NN) and a genetic algorithm (GA). Two engineering examples, the extrusion blow molding of a gas tank and of a bottle, demonstrate the merits of the proposed scheme.
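The following minimal Python sketch illustrates the iteration loop described above under several stated assumptions, none of which are the thesis's actual settings: scikit-learn's MLPRegressor stands in for the simulated neural network model, a simple distance-to-nearest-sample proxy replaces the fuzzy reliability estimate, a bare-bones real-coded GA replaces the thesis's GA, and a cheap analytical test function (Himmelblau's function) stands in for the actual experiments.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
BOUNDS = np.array([[-6.0, 6.0], [-6.0, 6.0]])      # design space (assumed 2-D)

def true_response(x):                               # cheap stand-in for the real experiment
    return (x[0] ** 2 + x[1] - 11) ** 2 + (x[0] + x[1] ** 2 - 7) ** 2

def reliability(x, samples, scale=1.0):
    """Distance-based stand-in for the fuzzy reliability estimate: predictions
    near existing learning samples are trusted more (value in (0, 1])."""
    return np.exp(-np.min(np.linalg.norm(samples - x, axis=1)) / scale)

def ga_search(model, samples, pop=40, gens=60):
    """Tiny real-coded GA; its fitness blends the surrogate prediction with a
    reliability penalty so the search favours trustworthy regions (minimization)."""
    def fitness(P):
        pred = model.predict(P)
        rel = np.array([reliability(x, samples) for x in P])
        return pred + (1.0 - rel) * np.abs(pred)    # distrusted predictions are penalized
    P = rng.uniform(BOUNDS[:, 0], BOUNDS[:, 1], size=(pop, 2))
    for _ in range(gens):
        elite = P[np.argsort(fitness(P))[: pop // 2]]            # truncation selection
        children = elite[rng.integers(0, len(elite), pop - len(elite))]
        children = children + rng.normal(0.0, 0.1, children.shape)  # Gaussian mutation
        P = np.clip(np.vstack([elite, children]), BOUNDS[:, 0], BOUNDS[:, 1])
    return P[np.argmin(fitness(P))]

# Start from a small learning set, then iterate: retrain -> search -> verify -> append.
X = rng.uniform(BOUNDS[:, 0], BOUNDS[:, 1], size=(8, 2))
y = np.array([true_response(x) for x in X])
for it in range(15):
    net = MLPRegressor(hidden_layer_sizes=(20, 20), max_iter=5000,
                       random_state=0).fit(X, y)
    x_star = ga_search(net, X)                      # reliable quasi-optimum of current model
    y_star = true_response(x_star)                  # verification = one extra experiment
    X, y = np.vstack([X, x_star]), np.append(y, y_star)
    print(f"iteration {it:2d}: x* = {x_star.round(3)}, f(x*) = {y_star:.4f}")
```

Each verified quasi-optimum enters the training set, so the model becomes most accurate precisely where the search concentrates, which is the behavior the abstract describes as improved sampling efficiency.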
For multimodal optimization, this study uses the downhill simplex search method to locate the relative quasi-optima in the simulated neural network model. The verification of each relative quasi-optimum serves as an additional learning sample, appended to the previous learning samples to retrain the neural network model. The model evolution and search processes iterate until the relative optima converge. Three two-variable benchmark numerical examples with multiple relative optima are presented to illustrate the feasibility and superiority of the proposed scheme.
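A similarly hedged sketch of the multimodal variant: multi-start downhill simplex (Nelder-Mead, via SciPy) searches on the current surrogate, each sufficiently distinct relative quasi-optimum is verified on the true response and appended to the learning samples, and the network is retrained. The multi-start count, the 0.3 distinctness threshold, and the test function are illustrative assumptions rather than the thesis's settings.

```python
import numpy as np
from scipy.optimize import minimize
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
BOUNDS = np.array([[-6.0, 6.0], [-6.0, 6.0]])     # design space (assumed 2-D)

def true_response(x):                              # multimodal stand-in for the real system
    return (x[0] ** 2 + x[1] - 11) ** 2 + (x[0] + x[1] ** 2 - 7) ** 2

# Initial small learning sample set.
X = rng.uniform(BOUNDS[:, 0], BOUNDS[:, 1], size=(12, 2))
y = np.array([true_response(x) for x in X])

for it in range(10):
    net = MLPRegressor(hidden_layer_sizes=(20, 20), max_iter=5000,
                       random_state=0).fit(X, y)

    def surrogate(x, net=net):                     # model prediction, clipped to the bounds
        x = np.clip(x, BOUNDS[:, 0], BOUNDS[:, 1])
        return float(net.predict(x.reshape(1, -1))[0])

    # Multi-start downhill simplex on the current surrogate to locate relative optima.
    starts = rng.uniform(BOUNDS[:, 0], BOUNDS[:, 1], size=(6, 2))
    candidates = [minimize(surrogate, s, method="Nelder-Mead").x for s in starts]

    # Verify only candidates that are distinct from the existing samples,
    # then append the verified responses to the learning set before retraining.
    for c in candidates:
        c = np.clip(c, BOUNDS[:, 0], BOUNDS[:, 1])
        if np.min(np.linalg.norm(X - c, axis=1)) > 0.3:
            X, y = np.vstack([X, c]), np.append(y, true_response(c))
    print(f"iteration {it:2d}: {len(X)} learning samples")
```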
|