Summary: | Master's thesis === 朝陽科技大學 (Chaoyang University of Technology) === 資訊工程系碩士班 (Master's Program, Department of Computer Science and Information Engineering) === Academic year 93 === In this thesis, supervised and reinforcement evolution learning methods are proposed for recurrent wavelet neuro-fuzzy networks (RWNFN). The RWNFN model is a feedforward multi-layer network that integrates the traditional Takagi-Sugeno-Kang (TSK) fuzzy model with wavelet neural networks (WNN). Its recurrent property comes from feeding internal variables, derived from the membership-function matching degrees, back into the network itself. The supervised evolution learning methods consist of the dynamic symbiotic evolution (DSE) and the self-constructing evolution algorithm (SCEA). In the DSE, better chromosomes are generated initially, and better mutation points are determined for performing dynamic mutation. The SCEA modifies the population structure of the DSE: each subpopulation evaluates a partial solution locally, and several subpopulations are combined to construct a full solution. Moreover, the SCEA uses a self-constructing learning algorithm that decides the input partition from the training data, so the RWNFN model is constructed automatically; the DSE is then used to carry out parameter learning of the RWNFN model within the SCEA. Although the DSE and the SCEA achieve good performance in the simulations, exact training data may be expensive or even impossible to obtain in some real-world applications. To solve this problem, a reinforcement evolution learning method, the reinforcement dynamic symbiotic evolution (R-DSE), is proposed. In the R-DSE, the number of time steps elapsed before failure occurs is formulated as the fitness function, and the DSE is used to perform parameter learning. The simulation results verify the efficiency of the proposed supervised and reinforcement evolution learning methods.
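
To make the network structure concrete, the following Python sketch shows one way a recurrent wavelet fuzzy rule of this kind could be computed. It is only an illustrative sketch, not the thesis's exact formulation: the Gaussian memberships, the Mexican-hat wavelet consequent, the feedback weights theta, and the normalized defuzzification are all assumptions.

import numpy as np

# Illustrative sketch of one forward pass through an RWNFN-style rule.
# Assumed ingredients: Gaussian memberships, a Mexican-hat wavelet
# sub-network as the TSK consequent, and feedback weights theta that
# recycle the previous matching degrees (the internal variables).

def gaussian(x, m, s):
    return np.exp(-((x - m) ** 2) / (s ** 2))

def mexican_hat(z):
    return (1.0 - z ** 2) * np.exp(-0.5 * z ** 2)

class RWNFNRule:
    def __init__(self, n_inputs, rng):
        self.m = rng.normal(size=n_inputs)                 # membership centres
        self.s = np.abs(rng.normal(size=n_inputs)) + 0.5   # membership widths
        self.theta = 0.1 * rng.normal(size=n_inputs)       # feedback weights
        self.a = rng.normal(size=n_inputs)                 # wavelet translations
        self.b = np.ones(n_inputs)                         # wavelet dilations
        self.w = rng.normal(size=n_inputs)                 # consequent weights
        self.h = np.zeros(n_inputs)                        # internal (memory) variables

    def forward(self, x):
        u = x + self.theta * self.h        # recurrent input: add fed-back matching degrees
        mu = gaussian(u, self.m, self.s)   # matching degrees of this rule
        self.h = mu                        # store them for the next time step
        firing = np.prod(mu)               # rule firing strength
        wavelet_out = np.sum(self.w * mexican_hat((x - self.a) / self.b))
        return firing, firing * wavelet_out   # TSK-style local output

def rwnfn_output(rules, x):
    # network output: weighted sum of rule outputs, normalized by firing strengths
    firings, outputs = zip(*(r.forward(x) for r in rules))
    return sum(outputs) / (sum(firings) + 1e-12)

rng = np.random.default_rng(0)
rules = [RWNFNRule(n_inputs=2, rng=rng) for _ in range(4)]
print(rwnfn_output(rules, np.array([0.3, -0.7])))

The essential point is that the matching degrees computed at one time step are stored and folded back into the membership inputs at the next step; that feedback of internal variables is what makes the network recurrent.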
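
The symbiotic idea shared by the DSE and the SCEA, evolving partial solutions in subpopulations and composing them into full solutions, can be sketched as follows. The real-coded chromosome encoding, the number of composed trials, the credit-sharing rule, and the placeholder fitness function are assumptions for illustration; the dynamic-mutation and selection operators of the thesis are omitted.

import numpy as np

# Illustrative sketch of symbiotic evolution with subpopulations:
# each subpopulation encodes one fuzzy rule (a partial solution); a
# full solution is evaluated by drawing one chromosome from every
# subpopulation, and the resulting fitness is credited back to the
# chromosomes that took part.

rng = np.random.default_rng(1)
N_RULES, POP_SIZE, GENE_LEN = 4, 20, 10

subpops = [rng.normal(size=(POP_SIZE, GENE_LEN)) for _ in range(N_RULES)]
credit = [np.zeros(POP_SIZE) for _ in range(N_RULES)]
trials = [np.zeros(POP_SIZE) for _ in range(N_RULES)]

def evaluate(full_solution):
    # placeholder fitness; in the thesis this would be the RWNFN training
    # error (supervised case) or the time steps before failure (R-DSE case)
    return -np.sum(full_solution ** 2)

for _ in range(200):                                   # composed trials
    picks = [rng.integers(POP_SIZE) for _ in range(N_RULES)]
    full = np.concatenate([subpops[r][picks[r]] for r in range(N_RULES)])
    f = evaluate(full)
    for r, i in enumerate(picks):                      # share credit with each partial solution
        credit[r][i] += f
        trials[r][i] += 1

avg = [np.where(trials[r] > 0, credit[r] / np.maximum(trials[r], 1), -np.inf)
       for r in range(N_RULES)]                        # mean fitness; untried chromosomes ignored
best_rule_0 = subpops[0][np.argmax(avg[0])]
print(best_rule_0)

After such an evaluation round, reproduction, crossover, and (in the DSE) dynamic mutation would be applied within each subpopulation before the next round; those operators are not shown here.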
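
For the R-DSE, the fitness of a candidate solution is the number of time steps completed before failure occurs, which avoids the need for exact training data. The evaluation loop below is a minimal sketch of that idea; the toy plant dynamics, the failure condition, and the linear example controller are hypothetical placeholders, and in the thesis the controller would be the RWNFN whose parameters the DSE evolves.

import numpy as np

# Illustrative sketch of the reinforcement fitness: run a candidate
# controller on a plant and count the time steps survived before a
# failure signal occurs.

def simulate_until_failure(controller, plant_step, initial_state, max_steps=100000):
    """Return the time-steps-before-failure fitness of a controller."""
    state = np.asarray(initial_state, dtype=float)
    for t in range(max_steps):
        action = controller(state)              # e.g. the RWNFN output
        state, failed = plant_step(state, action)
        if failed:
            return t                            # fitness = steps survived
    return max_steps                            # the controller never failed

def toy_plant_step(state, action):
    # hypothetical plant: fail when the first state variable leaves [-1, 1]
    next_state = state + 0.02 * np.array([state[1], action])
    return next_state, bool(abs(next_state[0]) > 1.0)

fitness = simulate_until_failure(lambda s: -2.0 * s[0] - 0.5 * s[1],
                                 toy_plant_step, [0.1, 0.0])
print(fitness)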
|