Stronger convergence results for deep residual networks: network width scales linearly with training data size
Main Author: | |
---|---|
Format: | Article |
Language: | English |
Published: | Oxford University Press, 2022 |
Subjects: | |
Online Access: | View Fulltext in Publisher |
Summary: | Deep neural networks are highly expressive machine learning models able to interpolate arbitrary datasets. Deep nets are typically optimized via first-order methods, and the optimization process depends crucially on the characteristics of both the network and the dataset. This work sheds light on the relation between network size and dataset properties, with an emphasis on deep residual networks (ResNets). Our contribution is that if the network Jacobian is full rank, gradient descent for the quadratic loss with a smooth activation converges to a global minimum even if the network width scales only linearly with the training data size. |
ISSN: | 2049-8772 |
DOI: | 10.1093/imaiai/iaaa030 |
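The setting described in the summary, gradient descent on the quadratic loss for a residual network with a smooth activation and width on the order of the number of training samples, can be sketched in a few lines. Everything below (the one-block architecture, the sizes, the step size, the use of numerical gradients) is an illustrative assumption for demonstration, not the paper's construction or proof technique:

```python
import numpy as np

# Illustrative sketch: a one-residual-block network
#   f(x) = a^T (h + V tanh(U x)),  h = U x,
# trained by full-batch gradient descent on the quadratic loss.
# tanh is the smooth activation; width m is set to 2n (linear in sample count).
rng = np.random.default_rng(0)
n, d, m = 8, 3, 16                  # n samples, input dim d, width m = 2n
X = rng.standard_normal((n, d))
y = rng.standard_normal(n)

# Parameters packed into one vector so a simple numeric-gradient loop suffices.
shapes = [(m, d), (m, m), (m,)]     # U (first layer), V (residual branch), a (readout)
sizes = [int(np.prod(s)) for s in shapes]
theta = rng.standard_normal(sum(sizes)) * 0.1

def unpack(t):
    out, i = [], 0
    for s, sz in zip(shapes, sizes):
        out.append(t[i:i + sz].reshape(s))
        i += sz
    return out

def loss(t):
    U, V, a = unpack(t)
    H = X @ U.T                      # first layer
    H = H + np.tanh(H) @ V.T         # residual block with smooth activation
    pred = H @ a
    return 0.5 * np.mean((pred - y) ** 2)   # quadratic loss

def num_grad(t, eps=1e-6):           # central-difference gradient (for clarity, not speed)
    g = np.zeros_like(t)
    for i in range(t.size):
        e = np.zeros_like(t)
        e[i] = eps
        g[i] = (loss(t + e) - loss(t - e)) / (2 * eps)
    return g

lr = 0.2
losses = [loss(theta)]
for _ in range(50):
    theta = theta - lr * num_grad(theta)
    losses.append(loss(theta))
print(f"loss: {losses[0]:.4f} -> {losses[-1]:.4f}")
```

With a small step size the training loss decreases monotonically on this toy problem; the paper's result concerns when such descent reaches a global minimum, which this sketch does not establish.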