Doing the Impossible: Why Neural Networks Can Be Trained at All
As deep neural networks grow in size, from thousands to millions to billions of weights, the performance of those networks becomes limited by our ability to accurately train them. A common naive question arises: if we have a system with billions of degrees of freedom, don't we also need billions of samples to train it?
| Main Authors | Nathan O. Hodas, Panos Stinis |
|---|---|
| Format | Article |
| Language | English |
| Published | Frontiers Media S.A., 2018-07-01 |
| Series | Frontiers in Psychology |
| Subjects | |
| Online Access | https://www.frontiersin.org/article/10.3389/fpsyg.2018.01185/full |
Similar Items
- Limit Theorems as Blessing of Dimensionality: Neural-Oriented Overview
  by: Vladik Kreinovich, et al.
  Published: (2021-04-01)
- Spatial Spectral Band Selection for Enhanced Hyperspectral Remote Sensing Classification Applications
  by: Ruben Moya Torres, et al.
  Published: (2020-08-01)
- Mutual Information Based Learning Rate Decay for Stochastic Gradient Descent Training of Deep Neural Networks
  by: Shrihari Vasudevan
  Published: (2020-05-01)
- Automatic Personality Traits Perception Using Asymmetric Auto-Encoder
  by: Effat Jalaeian Zaferani, et al.
  Published: (2021-01-01)
- İhtilaf Çözümleme Yöntemi Olarak Mübâhele (Mubāhalah as a Method of Dispute Resolution)
  by: Halil Aldemir
  Published: (2011-12-01)