A new scheme for training ReLU-based multi-layer feedforward neural networks

This thesis examines a new scheme for training Rectified Linear Unit (ReLU) based feedforward neural networks. The project starts with a row-by-row updating strategy designed for Single-hidden Layer Feedforward neural Networks (SLFNs). This strategy exploits the properties held by ReLUs and...
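The networks described above can be illustrated with a minimal sketch. This is not the thesis's updating algorithm (the abstract is truncated), only a hypothetical forward pass of a ReLU-activated SLFN, whose piecewise-linear activation is the property such schemes typically exploit:

```python
import numpy as np

def relu(z):
    # ReLU is piecewise linear: identity for z > 0, zero otherwise
    return np.maximum(z, 0.0)

def slfn_forward(X, W, b, beta):
    # Single-hidden-layer feedforward network:
    # hidden activations H = relu(X W + b), output Y = H beta
    H = relu(X @ W + b)
    return H @ beta

# toy example (all shapes hypothetical): 3 samples, 2 inputs,
# 4 hidden units, 1 output
rng = np.random.default_rng(0)
X = rng.standard_normal((3, 2))
W = rng.standard_normal((2, 4))
b = rng.standard_normal(4)
beta = rng.standard_normal((4, 1))
Y = slfn_forward(X, W, b, beta)
print(Y.shape)  # (3, 1)
```

Because the hidden layer output H is linear in the output weights beta, such networks are often trained by solving for beta in closed form, as in the Extreme Learning Machine (ELM) literature listed under the record's subjects.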

Full description

Bibliographic Details
Main Author: Wang, Hao
Format: Others
Language: English
Published: KTH, Skolan för datavetenskap och kommunikation (CSC), 2017
Subjects:
ELM
Online Access: http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-217384