Parameter Continuation with Secant Approximation for Deep Neural Networks
Non-convex optimization of deep neural networks is a well-studied problem. We present a novel application of continuation methods to deep learning optimization that can arrive at better solutions. In our method, we first decompose the original optimization problem into a sequence of...
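The abstract is cut off above, so the exact formulation is not available here. The title suggests a predictor-corrector continuation scheme: an easy surrogate loss is gradually deformed into the original non-convex loss, and a secant (two-point) extrapolation of the previous solutions warm-starts each subproblem. Below is a minimal NumPy sketch of that generic pattern on a toy objective, not the thesis's actual deep-network implementation; the lambda schedule, the convex surrogate, and all function names are illustrative assumptions.

```python
import numpy as np

def loss(theta, lam):
    """Homotopy objective (assumed form): lam=0 is an easy convex
    surrogate, lam=1 recovers the hard non-convex target."""
    convex = np.sum(theta ** 2)
    nonconvex = np.sum(np.sin(3.0 * theta) + theta ** 2)
    return (1.0 - lam) * convex + lam * nonconvex

def grad(theta, lam):
    """Gradient of the blended objective above."""
    g_convex = 2.0 * theta
    g_nonconvex = 3.0 * np.cos(3.0 * theta) + 2.0 * theta
    return (1.0 - lam) * g_convex + lam * g_nonconvex

def minimize(theta0, lam, lr=0.05, steps=200):
    """Corrector: plain gradient descent on the subproblem at fixed lam."""
    theta = theta0.copy()
    for _ in range(steps):
        theta -= lr * grad(theta, lam)
    return theta

rng = np.random.default_rng(0)
theta = rng.normal(size=5)

lams = np.linspace(0.0, 1.0, 11)  # illustrative continuation schedule
history = []
for i, lam in enumerate(lams):
    if len(history) >= 2:
        # Secant predictor: extrapolate the solution path linearly
        # through the two previous subproblem solutions.
        t_prev, t_curr = history[-2], history[-1]
        step = (lam - lams[i - 1]) / (lams[i - 1] - lams[i - 2])
        theta = t_curr + step * (t_curr - t_prev)
    theta = minimize(theta, lam)  # correct at the new lam
    history.append(theta.copy())

print("final loss at lam=1:", loss(history[-1], 1.0))
```

Each subproblem is warm-started from the secant prediction rather than the raw previous solution, which is the usual motivation for secant (pseudo-arclength-style) predictors in parameter continuation: the corrector then needs fewer steps per subproblem.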
Main Author: Pathak, Harsh Nilesh
Other Authors: Kyumin Lee, Reader
Format: Others
Published: Digital WPI, 2018
Online Access: https://digitalcommons.wpi.edu/etd-theses/1256
https://digitalcommons.wpi.edu/cgi/viewcontent.cgi?article=2262&context=etd-theses
Similar Items
- Homotopy Analysis-Based Hybrid Genetic Algorithm and Secant Method to Solve IVP and Higher-Order BVP
  by: Hala A. Omar
  Published: (2021-01-01)
- A modified hyperbolic secant distribution
  by: Panu Thongchan, et al.
  Published: (2017-02-01)
- Application of a Generalized Secant Method to Nonlinear Equations with Complex Roots
  by: Avram Sidi
  Published: (2021-07-01)
- Two-step secant type method with approximation of the inverse operator
  by: S.M. Shakhno, et al.
  Published: (2013-06-01)