On the training of feedforward neural networks.

Bibliographic Details
Other Authors: Wong, Hau-san.
Format: Others
Language: English
Published: Chinese University of Hong Kong 1993
Subjects: Neural networks (Computer science); Feedforward control systems; Computer algorithms; Machine learning
Online Access:http://library.cuhk.edu.hk/record=b5887738
http://repository.lib.cuhk.edu.hk/en/item/cuhk-319198
id ndltd-cuhk.edu.hk-oai-cuhk-dr-cuhk_319198
record_format oai_dc
collection NDLTD
language English
format Others
sources NDLTD
topic Neural networks (Computer science)
Feedforward control systems
Computer algorithms
Machine learning
description by Hau-san Wong.
Thesis (M.Phil.)--Chinese University of Hong Kong, 1993.
Includes bibliographical references (leaves [178-183]).
Chapter 1 --- INTRODUCTION
Chapter 1.1 --- Learning versus Explicit Programming --- p.1-1
Chapter 1.2 --- Artificial Neural Networks --- p.1-2
Chapter 1.3 --- Learning in ANN --- p.1-3
Chapter 1.4 --- Problems of Learning in BP Networks --- p.1-5
Chapter 1.5 --- Dynamic Node Architecture for BP Networks --- p.1-7
Chapter 1.6 --- Incremental Learning --- p.1-10
Chapter 1.7 --- Research Objective and Thesis Organization --- p.1-11
Chapter 2 --- THE FEEDFORWARD MULTILAYER NEURAL NETWORK
Chapter 2.1 --- The Perceptron --- p.2-1
Chapter 2.2 --- The Generalization of the Perceptron --- p.2-4
Chapter 2.3 --- The Multilayer Feedforward Network --- p.2-5
Chapter 3 --- SOLUTIONS TO THE BP LEARNING PROBLEM
Chapter 3.1 --- Introduction --- p.3-1
Chapter 3.2 --- Attempts in the Establishment of a Viable Hidden Representation Model --- p.3-5
Chapter 3.3 --- Dynamic Node Creation Algorithms --- p.3-9
Chapter 3.4 --- Concluding Remarks --- p.3-15
Chapter 4 --- THE GROWTH ALGORITHM FOR NEURAL NETWORKS
Chapter 4.1 --- Introduction --- p.4-2
Chapter 4.2 --- The Radial Basis Function --- p.4-6
Chapter 4.3 --- The Additional Input Node and the Modified Nonlinearity --- p.4-9
Chapter 4.4 --- The Initialization of the New Hidden Node --- p.4-11
Chapter 4.5 --- Initialization of the First Node --- p.4-15
Chapter 4.6 --- Practical Considerations for the Growth Algorithm --- p.4-18
Chapter 4.7 --- The Convergence Proof for the Growth Algorithm --- p.4-20
Chapter 4.8 --- The Flow of the Growth Algorithm --- p.4-21
Chapter 4.9 --- Experimental Results and Performance Analysis --- p.4-21
Chapter 4.10 --- Concluding Remarks --- p.4-33
Chapter 5 --- KNOWLEDGE REPRESENTATION IN NEURAL NETWORKS
Chapter 5.1 --- An Alternative Perspective to Knowledge Representation in Neural Networks: The Temporal Vector (T-Vector) Approach --- p.5-1
Chapter 5.2 --- Prior Research Works in the T-Vector Approach --- p.5-2
Chapter 5.3 --- Formulation of the T-Vector Approach --- p.5-3
Chapter 5.4 --- Relation of the Hidden T-Vectors to the Output T-Vectors --- p.5-6
Chapter 5.5 --- Relation of the Hidden T-Vectors to the Input T-Vectors --- p.5-10
Chapter 5.6 --- An Inspiration for a New Training Algorithm from the Current Model --- p.5-12
Chapter 6 --- THE DETERMINISTIC TRAINING ALGORITHM FOR NEURAL NETWORKS
Chapter 6.1 --- Introduction --- p.6-1
Chapter 6.2 --- The Linear Independency Requirement for the Hidden T-Vectors --- p.6-3
Chapter 6.3 --- Inspiration of the Current Work from the Bärmann T-Vector Model --- p.6-5
Chapter 6.4 --- General Framework of the Dynamic Node Creation Algorithm --- p.6-10
Chapter 6.5 --- The Deterministic Initialization Scheme for the New Hidden Nodes
Chapter 6.5.1 --- Introduction --- p.6-12
Chapter 6.5.2 --- Determination of the Target T-Vector
Chapter 6.5.2.1 --- Introduction --- p.6-15
Chapter 6.5.2.2 --- Modelling of the Target Vector β_Q h_Q --- p.6-16
Chapter 6.5.2.3 --- Near-Linearity Condition for the Sigmoid Function --- p.6-18
Chapter 6.5.3 --- Preparation for the BP Fine-Tuning Process --- p.6-24
Chapter 6.5.4 --- Determination of the Target Hidden T-Vector --- p.6-28
Chapter 6.5.5 --- Determination of the Hidden Weights --- p.6-29
Chapter 6.5.6 --- Determination of the Output Weights --- p.6-30
Chapter 6.6 --- Linear Independency Assurance for the New Hidden T-Vector --- p.6-30
Chapter 6.7 --- Extension to the Multi-Output Case --- p.6-32
Chapter 6.8 --- Convergence Proof for the Deterministic Algorithm --- p.6-35
Chapter 6.9 --- The Flow of the Deterministic Dynamic Node Creation Algorithm --- p.6-36
Chapter 6.10 --- Experimental Results and Performance Analysis --- p.6-36
Chapter 6.11 --- Concluding Remarks --- p.6-50
Chapter 7 --- THE GENERALIZATION MEASURE MONITORING SCHEME
Chapter 7.1 --- The Problem of Generalization for Neural Networks --- p.7-1
Chapter 7.2 --- Prior Attempts in Solving the Generalization Problem --- p.7-2
Chapter 7.3 --- The Generalization Measure --- p.7-4
Chapter 7.4 --- The Adoption of the Generalization Measure in the Deterministic Algorithm --- p.7-5
Chapter 7.5 --- Monitoring of the Generalization Measure --- p.7-6
Chapter 7.6 --- Correspondence between the Generalization Measure and the Generalization Capability of the Network --- p.7-8
Chapter 7.7 --- Experimental Results and Performance Analysis --- p.7-12
Chapter 7.8 --- Concluding Remarks --- p.7-16
Chapter 8 --- THE ESTIMATION OF THE INITIAL HIDDEN LAYER SIZE
Chapter 8.1 --- The Need for an Initial Hidden Layer Size Estimation --- p.8-1
Chapter 8.2 --- The Initial Hidden Layer Estimation Scheme --- p.8-2
Chapter 8.3 --- The Extension of the Estimation Procedure to the Multi-Output Network --- p.8-6
Chapter 8.4 --- Experimental Results and Performance Analysis --- p.8-6
Chapter 8.5 --- Concluding Remarks --- p.8-16
Chapter 9 --- CONCLUSION
Chapter 9.1 --- Contributions --- p.9-1
Chapter 9.2 --- Suggestions for Further Research --- p.9-3
REFERENCES --- p.R-1
APPENDIX --- p.A-1
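
The outline above centres on two themes: backpropagation (BP) training of sigmoid feedforward networks, and dynamic node creation, where the hidden layer is grown during training instead of being fixed in advance. As a point of reference only, the following is a minimal sketch of that general idea in Python with NumPy. It is not the thesis's Growth Algorithm (Chapter 4) or Deterministic Training Algorithm (Chapter 6): the random initialisation of new nodes, the stall-based growth trigger, the XOR task, and all hyperparameters below are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def init_layer(n_in, n_out):
    # Small random weights; the bias is folded in as an extra input row.
    return rng.normal(scale=0.5, size=(n_in + 1, n_out))

def forward(X, W1, W2):
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])   # append bias input
    H = sigmoid(Xb @ W1)                            # hidden activations
    Hb = np.hstack([H, np.ones((H.shape[0], 1))])
    Y = sigmoid(Hb @ W2)                            # network outputs
    return Xb, H, Hb, Y

def bp_epoch(X, T, W1, W2, lr=0.5):
    # One full-batch BP step on the summed squared error.
    Xb, H, Hb, Y = forward(X, W1, W2)
    dY = (Y - T) * Y * (1.0 - Y)                    # output-layer delta
    dH = (dY @ W2[:-1].T) * H * (1.0 - H)           # hidden-layer delta
    W2 -= lr * (Hb.T @ dY)                          # updates mutate in place
    W1 -= lr * (Xb.T @ dH)
    return 0.5 * np.sum((Y - T) ** 2)

def grow_hidden(W1, W2):
    # Dynamic node creation: append one hidden node. Random initialisation
    # is a simplification; the thesis derives initialisations for new nodes.
    W1 = np.hstack([W1, rng.normal(scale=0.5, size=(W1.shape[0], 1))])
    new_row = rng.normal(scale=0.5, size=(1, W2.shape[1]))
    W2 = np.vstack([W2[:-1], new_row, W2[-1:]])     # keep bias row last
    return W1, W2

# XOR as a toy task; start from a single hidden node.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0], [1], [1], [0]], dtype=float)
W1, W2 = init_layer(2, 1), init_layer(1, 1)

prev_err = np.inf
for epoch in range(20000):
    err = bp_epoch(X, T, W1, W2)
    if err < 0.01:
        break
    if epoch % 2000 == 1999:
        if prev_err - err < 1e-3:                   # error has stalled: grow
            W1, W2 = grow_hidden(W1, W2)
        prev_err = err

print(f"final error {err:.4f} with {W1.shape[1]} hidden node(s)")

Run as-is, the sketch typically grows from one to two or three hidden nodes before the XOR error falls below the tolerance; the point of Chapters 4 and 6 is precisely to replace the random initialisation and the ad-hoc stall test used here with principled node-initialisation schemes that come with convergence proofs.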
author2 Wong, Hau-san.
title On the training of feedforward neural networks.
publisher Chinese University of Hong Kong
publishDate 1993
spelling ndltd-cuhk.edu.hk-oai-cuhk-dr-cuhk_319198 2019-02-19T03:50:39Z
Chinese University of Hong Kong Graduate School. Division of Electronic Engineering.
Text bibliography print ix, [185] leaves : ill. ; 30 cm. cuhk:319198
Use of this resource is governed by the terms and conditions of the Creative Commons “Attribution-NonCommercial-NoDerivatives 4.0 International” License (http://creativecommons.org/licenses/by-nc-nd/4.0/)