Learning deep linear neural networks: Riemannian gradient flows and convergence to global minimizers
We study the convergence of gradient flows related to learning deep linear neural networks (where the activation function is the identity map) from data. In this case, the composition of the network layers amounts to simply multiplying the weight matrices of all layers together, resulting in an over...
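The abstract's key observation is that with the identity activation, the whole network collapses to a single matrix product. A minimal numerical sketch (dimensions and variable names are illustrative, not from the paper) of that collapse, assuming square layers for simplicity:

```python
import numpy as np

# Deep *linear* network: with identity activations, composing the layers
# is the same as multiplying all weight matrices together.
rng = np.random.default_rng(0)

depth, dim = 3, 4
layers = [rng.standard_normal((dim, dim)) for _ in range(depth)]
x = rng.standard_normal(dim)

# Forward pass: apply each layer in turn.
out_layers = x
for W in layers:
    out_layers = W @ out_layers

# Equivalent single matrix: W_L ... W_2 W_1.
W_prod = layers[-1]
for W in reversed(layers[:-1]):
    W_prod = W_prod @ W

out_product = W_prod @ x
assert np.allclose(out_layers, out_product)
```

The factorization is what makes the problem overparametrized: many tuples of weight matrices realize the same end-to-end map, which is why the paper studies the induced flow on the product.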
Format: Article
Language: English
Published: Oxford University Press, 2022