Empirical analysis of neural networks training optimisation

A dissertation submitted to the Faculty of Science, University of the Witwatersrand, Johannesburg, in fulfilment of the requirements for the degree of Master of Science in Mathematical Statistics, School of Statistics and Actuarial Science. October 2016.

Neural networks (NNs) may be characterised by complex error functions with attributes such as saddle points, local minima, even-spots and plateaus. This complicates the associated training process in terms of efficiency, convergence and accuracy, given that training is done by minimising such complex error functions. This study empirically investigates the performance of two NN training algorithms based on unconstrained and global optimisation theories: Resilient propagation (Rprop) and Conjugate Gradient with Polak-Ribière updates (CGP). It also shows how the network structure plays a role in the training optimisation of NNs. To this end, various training scenarios are used to classify two protein data sets, the Escherichia coli and Yeast data. These training scenarios use varying numbers of hidden nodes and training iterations. The results show that Rprop outperforms CGP. Moreover, the performance of the classifiers varies across the training scenarios; sketches of the two named algorithms and of such a scenario grid follow below.
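For context on the first algorithm the abstract names: Rprop adapts a separate step size for each weight from the sign of successive partial derivatives, ignoring the gradient's magnitude. A minimal NumPy sketch of this sign-based update (an illustration of the general algorithm, not the dissertation's own code; the parameter defaults are the standard values from Riedmiller and Braun's formulation):

    import numpy as np

    def rprop_step(w, grad, prev_grad, step,
                   eta_plus=1.2, eta_minus=0.5,
                   step_min=1e-6, step_max=50.0):
        # Grow a weight's step while its gradient keeps the same sign;
        # shrink it when the sign flips (the last step overshot a minimum).
        same = grad * prev_grad
        step = np.where(same > 0, np.minimum(step * eta_plus, step_max), step)
        step = np.where(same < 0, np.maximum(step * eta_minus, step_min), step)
        # The update uses only the gradient's sign, never its magnitude.
        w = w - np.sign(grad) * step
        return w, step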

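The second algorithm, Polak-Ribière conjugate gradient, replaces plain steepest descent with a search direction that carries momentum from the previous direction. A sketch of the direction update (again illustrative, not the dissertation's implementation; the max(beta, 0) restart is the common PR+ safeguard and is an assumption here, not necessarily what the author used):

    import numpy as np

    def polak_ribiere_direction(grad, prev_grad, prev_dir):
        # Polak-Ribière coefficient: how much of the previous search
        # direction to mix into the new one.
        beta = grad @ (grad - prev_grad) / (prev_grad @ prev_grad)
        beta = max(beta, 0.0)  # PR+ restart: fall back to steepest descent
        return -grad + beta * prev_dir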

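Finally, the "training scenarios" the abstract mentions amount to a grid over network structure and training length. A sketch of such a grid (all listed values are hypothetical; the dissertation's actual grid is not given in this record):

    from itertools import product

    hidden_nodes = [5, 10, 20]      # hypothetical hidden-layer sizes
    iterations = [100, 500, 1000]   # hypothetical training-iteration budgets
    # Each (nodes, iters) pair is one training scenario to evaluate.
    scenarios = list(product(hidden_nodes, iterations))
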
Bibliographic Details
Main Author: Kayembe, Mutamba Tonton
Format: Thesis (application/pdf)
Language: English
Published: 2017
Subjects: Neural networks (Computer science); Training
Physical Description: Online resource (xiv, 145 leaves)
Online Access: Kayembe, Mutamba Tonton (2016) Empirical analysis of neural networks training optimisation, University of the Witwatersrand, Johannesburg, <http://wiredspace.wits.ac.za/handle/10539/21679>
http://hdl.handle.net/10539/21679