Reducing parameter space for neural network training
ABSTRACT: For neural networks (NNs) with rectified linear unit (ReLU) or binary activation functions, we show that their training can be accomplished in a reduced parameter space. Specifically, the weights in each neuron can be trained on the unit sphere, as opposed to the entire space, and the threshold…
| Main Authors | Tong Qin, Ling Zhou, Dongbin Xiu |
|---|---|
| Format | Article |
| Language | English |
| Published | Elsevier, 2020-03-01 |
| Series | Theoretical and Applied Mechanics Letters |
| Online Access | http://www.sciencedirect.com/science/article/pii/S2095034920300301 |
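
The abstract states the core technique: for ReLU networks, each neuron's weight vector can be trained on the unit sphere and its threshold in a bounded interval, rather than over all of Euclidean space. Below is a minimal sketch of that idea using projected gradient descent on a one-hidden-layer network. It is not the algorithm from the article; the network width, learning rate, toy data, and the threshold bound `b_bound` are all illustrative assumptions.

```python
import numpy as np

# Sketch: train a one-hidden-layer ReLU network while projecting each
# neuron's weight vector back onto the unit sphere and clipping its
# threshold to a bounded interval after every gradient step.
# All hyperparameters here are assumptions, not values from the paper.

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

# Toy regression data on [-1, 1]^2.
X = rng.uniform(-1.0, 1.0, size=(256, 2))
y = np.sin(np.pi * X[:, :1]) * X[:, 1:]

n_hidden = 32
W = rng.normal(size=(n_hidden, 2))             # neuron weight vectors
W /= np.linalg.norm(W, axis=1, keepdims=True)  # start on the unit sphere
b = rng.uniform(-1.0, 1.0, size=(n_hidden,))   # neuron thresholds
c = np.zeros((n_hidden, 1))                    # output-layer weights

lr, b_bound = 0.05, 1.5  # assumed step size and threshold interval

for step in range(2000):
    # Forward pass: hidden activations relu(w . x - b), then prediction.
    H = relu(X @ W.T - b)       # shape (N, n_hidden)
    pred = H @ c                # shape (N, 1)
    err = pred - y

    # Gradients of 0.5 * mean squared error.
    grad_c = H.T @ err / len(X)
    dH = (err @ c.T) * (H > 0)  # backprop through the ReLU
    grad_W = dH.T @ X / len(X)
    grad_b = -dH.mean(axis=0)

    c -= lr * grad_c
    W -= lr * grad_W
    b -= lr * grad_b

    # Projection step: weights back to the unit sphere, thresholds
    # back into the bounded interval [-b_bound, b_bound].
    W /= np.linalg.norm(W, axis=1, keepdims=True)
    b = np.clip(b, -b_bound, b_bound)

print("final MSE:", float(np.mean((relu(X @ W.T - b) @ c - y) ** 2)))
```

Projecting after every step keeps the iterates inside the reduced parameter space throughout training. This restriction is plausible because scaling a neuron's weight vector can be compensated by rescaling its threshold and output weight, which is in line with the equivalence the abstract claims.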
Similar Items
- High-dimensional neural feature design for layer-wise reduction of training cost
  by: Alireza M. Javid, et al.
  Published: (2020-09-01)
- Rectified Exponential Units for Convolutional Neural Networks
  by: Yao Ying, et al.
  Published: (2019-01-01)
- The Rough Linear Approximate Space and Soft Linear Space
  by: Ma Yingcang, et al.
  Published: (2016-04-01)
- Chebyshev centers and best simultaneous approximation in normed linear spaces
  by: Taylor, Barbara J.
  Published: (1988)
- DC-Link Voltage Disturbance Rejection Strategy of PWM Rectifiers Based on Reduced-Order LESO
  by: Zhifeng Pan, et al.
  Published: (2019-01-01)