Supervised and Reinforcement Evolution Learning for Recurrent Wavelet Neuro-Fuzzy Networks and Its Applications
Degree: Master's (碩士), Chaoyang University of Technology (朝陽科技大學), Master Program of the Department of Computer Science and Information Engineering (資訊工程系碩士班), academic year 93
Main Author: Yong-Ji Xu (徐永吉)
Other Authors (Advisor): Cheng-Jian Lin (林正堅)
Format: Others
Language: en_US
Published: 2005
Online Access: http://ndltd.ncl.edu.tw/handle/mpf7zx
Record ID: ndltd-TW-093CYUT5392012
Chinese Title: 教導式及增強式進化學習用於遞迴式小波類神經模糊網路及其應用
Thesis Type: 學位論文 (degree thesis), 127 pages
Abstract:

In this thesis, supervised and reinforcement evolution learning methods are proposed for the recurrent wavelet neuro-fuzzy network (RWNFN). The RWNFN model is a multi-layer feedforward network that integrates the traditional Takagi-Sugeno-Kang (TSK) fuzzy model with wavelet neural networks (WNN); its recurrent property comes from feeding internal variables, derived from the membership-function matching degrees, back into the network.

The supervised evolution learning methods are the dynamic symbiotic evolution (DSE) and the self-constructing evolution algorithm (SCEA). In the DSE, better chromosomes are generated at initialization and better mutation points are selected for performing dynamic mutation. The SCEA modifies the population structure of the DSE: each subpopulation evaluates a partial solution locally, and several subpopulations together construct a full solution. Moreover, the SCEA uses a self-constructing learning algorithm to build the RWNFN model automatically, deciding the input-space partition from the input training data, while the DSE carries out parameter learning of the RWNFN model within the SCEA.

Although the DSE and SCEA perform well in the simulations, exact training data may be expensive or even impossible to obtain in some real-world applications. To address this problem, a reinforcement evolution learning method, the reinforcement dynamic symbiotic evolution (R-DSE), is proposed. In the R-DSE, the fitness function is formulated as the number of time steps completed before a failure occurs, and the DSE is used to perform parameter learning. The simulation results verify the efficiency of the proposed supervised and reinforcement learning methods.
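The abstract describes the RWNFN only at a high level. As a reading aid, the following Python sketch illustrates one plausible forward pass for such a network: Gaussian membership functions whose matching degrees are fed back to their own inputs (the recurrent part), product-inference firing strengths, and a small wavelet network in place of the usual TSK linear consequent. The layer equations, parameter names, and the choice of a Mexican-hat mother wavelet are assumptions made for illustration, not the formulation used in the thesis.

```python
import numpy as np


def mexican_hat(z):
    """Mexican-hat mother wavelet: psi(z) = (1 - z^2) * exp(-z^2 / 2)."""
    return (1.0 - z ** 2) * np.exp(-0.5 * z ** 2)


class RWNFNSketch:
    """Hypothetical recurrent wavelet neuro-fuzzy network (illustrative, not the thesis model)."""

    def __init__(self, n_in, n_rules, seed=0):
        rng = np.random.default_rng(seed)
        self.m = rng.normal(0.0, 1.0, (n_rules, n_in))   # Gaussian membership centers
        self.sigma = np.ones((n_rules, n_in))             # Gaussian membership widths
        self.theta = np.full((n_rules, n_in), 0.1)        # recurrent feedback weights
        self.a = np.ones((n_rules, n_in))                 # wavelet dilation parameters
        self.b = np.zeros((n_rules, n_in))                # wavelet translation parameters
        self.w = rng.normal(0.0, 0.5, (n_rules, n_in))    # wavelet output weights
        self.h = np.zeros((n_rules, n_in))                # fed-back matching degrees

    def reset_state(self):
        """Clear the internal (recurrent) variables between sequences."""
        self.h[:] = 0.0

    def forward(self, x):
        x = np.asarray(x, dtype=float)
        # Recurrent layer: each membership node sees the current input plus
        # its own previous matching degree scaled by a feedback weight.
        u = x[None, :] + self.theta * self.h
        mu = np.exp(-0.5 * ((u - self.m) / self.sigma) ** 2)
        self.h = mu                                        # stored for the next time step
        firing = np.prod(mu, axis=1)                       # rule firing strengths
        # Consequent layer: a one-layer wavelet network per rule instead of
        # the usual TSK linear polynomial.
        z = (x[None, :] - self.b) / self.a
        rule_out = np.sum(self.w * mexican_hat(z), axis=1)
        # Defuzzification: firing-strength-weighted average of rule outputs.
        return float(np.sum(firing * rule_out) / (np.sum(firing) + 1e-12))


# Example: run the network over a short input sequence.
net = RWNFNSketch(n_in=2, n_rules=4)
net.reset_state()
outputs = [net.forward([np.sin(0.1 * t), np.cos(0.1 * t)]) for t in range(5)]
```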
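The symbiotic-evolution idea behind the DSE and SCEA can be summarized as follows: a chromosome encodes a single fuzzy rule (a partial solution), full networks are assembled from several chromosomes, and each chromosome is credited with the average fitness of the networks it took part in. The sketch below illustrates only this credit-assignment loop, with one subpopulation per rule as in the SCEA; the dynamic-mutation and self-constructing details of the thesis are not reproduced, and all function and parameter names here are illustrative placeholders.

```python
import numpy as np


def evaluate_symbiotic(subpops, build_network, fitness_fn, n_trials=100, seed=0):
    """Credit assignment in symbiotic evolution (illustrative sketch).

    subpops      : list of R arrays, each of shape (pop_size, gene_len);
                   subpopulation r holds candidate encodings of rule r.
    build_network: assembles a network from one chromosome per subpopulation.
    fitness_fn   : scores an assembled network (higher is better).
    Returns, per subpopulation, the average fitness of the networks each
    chromosome participated in (zero for chromosomes never sampled).
    """
    rng = np.random.default_rng(seed)
    sums = [np.zeros(len(p)) for p in subpops]
    counts = [np.zeros(len(p)) for p in subpops]

    for _ in range(n_trials):
        # Assemble a full solution by picking one rule chromosome per subpopulation.
        idx = [rng.integers(len(p)) for p in subpops]
        net = build_network([p[i] for p, i in zip(subpops, idx)])
        f = fitness_fn(net)
        # Share the network's fitness with every participating chromosome.
        for r, i in enumerate(idx):
            sums[r][i] += f
            counts[r][i] += 1

    return [s / np.maximum(c, 1) for s, c in zip(sums, counts)]
```

A generation of a DSE- or SCEA-style algorithm would then select, crossover, and mutate within each subpopulation using these shared fitness values.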
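For the R-DSE, the abstract states only that the fitness function is the number of time steps completed before a failure occurs, so no desired input-output pairs are required. The sketch below shows such a fitness evaluation against a generic control-environment interface (reset/step returning a failure flag); the environment, the controller call, and the step limit are placeholders, not details taken from the thesis.

```python
def reinforcement_fitness(controller, env, max_steps=1000):
    """Fitness for R-DSE-style reinforcement evaluation (illustrative sketch).

    The controller is run in closed loop until the environment signals
    failure (e.g. a state constraint being violated); the fitness is
    simply the number of time steps survived.
    """
    state = env.reset()                     # assumed environment interface
    for step in range(max_steps):
        action = controller(state)          # e.g. an RWNFN used as a controller
        state, failed = env.step(action)    # assumed to return (next_state, failure flag)
        if failed:
            return step                     # survived `step` steps before failure
    return max_steps                        # survived the whole episode
```

In the R-DSE this value would replace a supervised, error-based fitness, while the evolutionary machinery performs the parameter search.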