Gradient Descent Optimization in Deep Learning Model Training Based on Multistage and Method Combination Strategy

Bibliographic Details
Main Authors: Chuanlei Zhang, Minda Yao, Wei Chen, Shanwen Zhang, Dufeng Chen, Yuliang Wu
Format: Article
Language: English
Published: Hindawi-Wiley 2021-01-01
Series: Security and Communication Networks
Online Access: http://dx.doi.org/10.1155/2021/9956773
Description
Summary: Gradient descent is the core and foundation of neural network training, and gradient descent optimization heuristics have greatly accelerated progress in deep learning. Although these methods are simple and effective, why they work remains poorly understood, and gradient descent optimization in deep learning has become a hot research topic. Some research efforts have tried to combine multiple methods to assist network training, but these combinations are largely empirical and lack theoretical guidance. In this paper, a framework is proposed to illustrate the principle of combining different gradient descent optimization methods by analyzing several adaptive methods and other learning-rate methods. Furthermore, inspired by the principles of warmup, cyclical learning rates (CLR), and stochastic gradient descent with warm restarts (SGDR), the concept of multistage training is introduced into the field of gradient descent optimization, and a gradient descent optimization strategy for deep learning model training based on a multistage and method-combination strategy is presented. The effectiveness of the proposed strategy is verified through extensive deep learning network training experiments.
ISSN: 1939-0122
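
The record does not give the paper's exact schedule, so the following is only a minimal sketch of what a multistage learning-rate strategy combining warmup with SGDR-style warm restarts might look like. The function name multistage_lr and all parameter values are hypothetical illustrations, not the authors' method.

import math

def multistage_lr(step, base_lr=0.1, warmup_steps=500,
                  cycle_steps=2000, min_lr=1e-4):
    # Hypothetical multistage schedule: stage 1 is a linear warmup;
    # later stages apply SGDR-style cosine annealing with warm restarts.
    if step < warmup_steps:
        # Stage 1: ramp the learning rate linearly from min_lr up to base_lr.
        return min_lr + (base_lr - min_lr) * step / warmup_steps
    # Stage 2 and beyond: cosine annealing, restarted every cycle_steps steps.
    t = (step - warmup_steps) % cycle_steps
    return min_lr + 0.5 * (base_lr - min_lr) * (1 + math.cos(math.pi * t / cycle_steps))

# Example usage: inspect the learning rate at a few training steps.
for s in (0, 250, 500, 1500, 2500, 4500):
    print(f"step {s:5d}: lr = {multistage_lr(s):.5f}")

In practice such a schedule would be wrapped in a training framework's scheduler API (for example, PyTorch's LambdaLR), and the stage boundaries are also natural points at which to switch between optimization methods, in the spirit of the method-combination strategy the abstract describes.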