Predicting parameters in deep learning
Main Author: |  |
Language: | English |
Published: | University of British Columbia, 2014 |
Online Access: | http://hdl.handle.net/2429/50999 |
Summary: | The recent success of large and deep neural network models has motivated the training of even larger and deeper networks with millions of parameters. Training these models usually requires parallel training methods, where communicating a large number of parameters becomes one of the main bottlenecks. We show that many deep learning models are over-parameterized and that their learned features can be predicted given only a small fraction of their parameters. We then propose a method that exploits this fact during training to reduce the number of parameters that need to be learned. Our method is orthogonal to the choice of network architecture and can be applied to a wide variety of neural network architectures and application areas. We evaluate this technique in various image and speech recognition experiments and show that we can learn only a small fraction of the parameters (as little as 10% in some cases) and predict the rest without a significant loss in the model's predictive accuracy. |
Faculty of Science, Department of Computer Science, Graduate
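The abstract only summarizes the idea, so the following is a minimal, hypothetical NumPy sketch rather than the thesis's actual method. It illustrates one way "learn a fraction of the parameters, predict the rest" can be realized: factorize a layer's weight matrix as W ≈ U V, keep the dictionary U fixed, and train only the small coefficient matrix V. All names, shapes, and the choice of a random dictionary are assumptions for illustration.

```python
import numpy as np

# Hypothetical sketch (names, shapes, and the random dictionary are illustrative,
# not taken from the thesis): a dense layer's weights W (d_in x d_out) are
# represented as W ≈ U @ V, where U is a fixed dictionary and only V is learned.
rng = np.random.default_rng(0)

d_in, d_out, k = 784, 256, 78                      # k * d_out ≈ 10% of d_in * d_out
U = rng.standard_normal((d_in, k)) / np.sqrt(k)    # fixed dictionary, never updated
V = rng.standard_normal((k, d_out)) * 0.01         # the only trainable parameters

def layer_forward(x):
    """Forward pass of a layer whose full weights are reconstructed on the fly."""
    W = U @ V                                      # "predicted" full weight matrix
    return np.maximum(x @ W, 0.0)                  # ReLU activation

# Toy SGD loop on a squared-error objective: gradients reach only V, so the
# number of trained parameters is k*d_out instead of d_in*d_out.
x = rng.standard_normal((32, d_in))
target = rng.standard_normal((32, d_out))
lr = 1e-3
for _ in range(3):
    W = U @ V
    pred = np.maximum(x @ W, 0.0)
    grad_pred = 2.0 * (pred - target) / x.shape[0]
    grad_W = x.T @ (grad_pred * (pred > 0))        # backprop through ReLU and matmul
    grad_V = U.T @ grad_W                          # chain rule: updates land on V only
    V -= lr * grad_V
```

With these illustrative shapes, roughly 10% of the full weight matrix's entries are trainable, matching the order of magnitude quoted in the abstract; the thesis itself constructs the dictionary more carefully and evaluates the approach on real image and speech recognition tasks.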