The Study of the Learning Ability of Multi-layer Neural Networks

Master's thesis === National Taiwan University === Graduate Institute of Computer Science and Information Engineering === Academic year 82 (ROC calendar) === The objective of this research is to propose methods for solving the learning problems of multi-layer neural networks. The best-known and most commonly used learning algorithm is the Back Propagation (BP) algorithm. BP has three main drawbacks: 1. slow learning speed; 2. convergence to local minima; and 3. the absence of any theoretical result that allows a priori determination of an optimal network architecture for a given task. To address these problems, we propose three methods: the first initializes the weights of multi-layer quadratic sigmoid networks; the second learns in successive residual spaces; the third uses the topology-preserving maps formed in MLPs. These methods can be applied to pattern recognition problems, especially when the training patterns have a line structure.
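For context, the sketch below shows the standard back-propagation weight update for a small two-layer sigmoid network, i.e. the baseline algorithm whose slow convergence and local minima the abstract criticizes. It is not an implementation of the thesis's three proposed methods; the network size, learning rate, epoch count, and the XOR toy data are illustrative assumptions.

# Minimal back-propagation sketch for a one-hidden-layer sigmoid network.
# Included only to illustrate the standard BP algorithm the abstract refers to;
# it is NOT the thesis's proposed methods. Sizes, learning rate, and the toy
# XOR data are illustrative assumptions.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_bp(X, T, hidden=4, lr=0.5, epochs=5000, seed=0):
    """Gradient-descent training under squared error; returns the weight matrices."""
    rng = np.random.default_rng(seed)
    n_in, n_out = X.shape[1], T.shape[1]
    W1 = rng.normal(0.0, 0.5, (n_in, hidden))   # input -> hidden weights
    W2 = rng.normal(0.0, 0.5, (hidden, n_out))  # hidden -> output weights
    for _ in range(epochs):
        # forward pass
        H = sigmoid(X @ W1)          # hidden activations
        Y = sigmoid(H @ W2)          # network outputs
        # backward pass: deltas for squared error E = 0.5 * ||Y - T||^2
        d_out = (Y - T) * Y * (1.0 - Y)          # output-layer delta
        d_hid = (d_out @ W2.T) * H * (1.0 - H)   # hidden-layer delta
        # gradient-descent weight updates
        W2 -= lr * H.T @ d_out
        W1 -= lr * X.T @ d_hid
    return W1, W2

if __name__ == "__main__":
    # XOR: a classic task where plain BP can converge slowly or get stuck
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    T = np.array([[0], [1], [1], [0]], dtype=float)
    W1, W2 = train_bp(X, T)
    print(np.round(sigmoid(sigmoid(X @ W1) @ W2), 2))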

Bibliographic Details
Title (Chinese): 多層神經網路學習能力之研究
Main Author: Yu, Wen-Jen (于文貞)
Other Authors: Liou, Cheng-Yuan (劉長遠)
Format: Thesis (學位論文); 78 pages
Language: Chinese (zh-TW)
Published: 1994
Source: NDLTD
Record ID: ndltd-TW-082NTU00392060
Online Access: http://ndltd.ncl.edu.tw/handle/62408512533758493389