Stronger convergence results for deep residual networks: network width scales linearly with training data size


Bibliographic Details
Main Author: Gulcu, T.C. (Author)
Format: Article
Language: English
Published: Oxford University Press 2022
Subjects:
Online Access: View Fulltext in Publisher
LEADER 01704nam a2200181Ia 4500
001 10.1093-imaiai-iaaa030
008 220718s2022 CNT 000 0 und d
022 |a 2049-8772 (ISSN) 
245 1 0 |a Stronger convergence results for deep residual networks: network width scales linearly with training data size 
260 0 |b Oxford University Press  |c 2022 
856 |z View Fulltext in Publisher  |u https://doi.org/10.1093/imaiai/iaaa030 
520 3 |a Deep neural networks are highly expressive machine learning models with the ability to interpolate arbitrary datasets. Deep nets are typically optimized via first-order methods, and the optimization process crucially depends on the characteristics of the network as well as the dataset. This work sheds light on the relation between the network size and the properties of the dataset, with an emphasis on deep residual networks (ResNets). Our contribution is that if the network Jacobian is full rank, gradient descent for the quadratic loss and smooth activation converges to the global minima even if the network width $m$ of the ResNet scales linearly with the sample size $n$ and logarithmically with the network depth $H$. Consequently, our work is able to provide a theoretical guarantee for the convergence of deep neural networks in the $m = \varOmega(n\log H)$ regime. © The Author(s) 2020. Published by Oxford University Press on behalf of the Institute of Mathematics and its Applications. All rights reserved. 
650 0 4 |a activation function 
650 0 4 |a deep network optimization 
650 0 4 |a deep residual networks 
650 0 4 |a neural tangent kernel 
700 1 |a Gulcu, T.C.  |e author 
773 |t Information and Inference  |x 2049-8772 (ISSN)  |g vol. 11, no. 2, pp. 497-532
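
The following is a minimal numerical sketch (in JAX) of the width regime described in the abstract above, not the paper's construction or proof: a smooth-activation (tanh) residual network whose width m is set proportional to n*log(H) and trained by full-batch gradient descent on a quadratic loss. The proportionality constant c, the 1/H residual scaling, the step size and the iteration count are illustrative assumptions, not values taken from the article.

import jax
import jax.numpy as jnp

key = jax.random.PRNGKey(0)
n, d, H = 20, 5, 8                           # sample size, input dimension, network depth
c = 2.0                                      # assumed proportionality constant
m = int(jnp.ceil(c * n * jnp.log(H)))        # width: linear in n, logarithmic in H

k1, k2, k3, k4, k5 = jax.random.split(key, 5)
X = jax.random.normal(k1, (n, d))            # toy inputs
y = jax.random.normal(k2, (n,))              # toy scalar targets

params = {
    "A": jax.random.normal(k3, (d, m)) / jnp.sqrt(d),     # input embedding
    "W": jax.random.normal(k4, (H, m, m)) / jnp.sqrt(m),  # residual-block weights
    "b": jax.random.normal(k5, (m,)) / jnp.sqrt(m),       # linear readout
}

def forward(params, X):
    h = X @ params["A"]
    for Wl in params["W"]:                   # H residual blocks with smooth activation
        h = h + jnp.tanh(h @ Wl) / H
    return h @ params["b"]

def quadratic_loss(params, X, y):
    return 0.5 * jnp.mean((forward(params, X) - y) ** 2)

grad_fn = jax.jit(jax.grad(quadratic_loss))
lr = 1e-2                                    # assumed step size
for step in range(1000):                     # full-batch gradient descent
    grads = grad_fn(params, X, y)
    params = jax.tree_util.tree_map(lambda p, g: p - lr * g, params, grads)

print("final training loss:", float(quadratic_loss(params, X, y)))

Under the article's assumptions (full-rank network Jacobian, smooth activation), gradient descent in this width regime is guaranteed to drive the quadratic training loss to a global minimum; the toy run above only illustrates the scaling $m = \varOmega(n\log H)$, not the convergence argument itself.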