Parameter Continuation with Secant Approximation for Deep Neural Networks

Non-convex optimization of deep neural networks is a well-researched problem. We present a novel application of continuation methods to deep learning optimization that can potentially arrive at a better solution. In our method, we first decompose the original optimization problem into a sequence of...
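
The sketch below is a minimal, illustrative PyTorch example of the general predictor-corrector continuation idea the abstract alludes to: a homotopy parameter blends an easy convex surrogate into the original non-convex loss, each subproblem is warm-started from the previous solution, and a secant (linear) extrapolation of the parameters serves as the predictor step. The surrogate loss, continuation schedule, model, and optimizer here are assumptions chosen for illustration, not the thesis's actual formulation.

    # Illustrative parameter-continuation training loop (not the thesis's exact method).
    import torch
    import torch.nn as nn

    def homotopy_loss(model, x, y, lam, criterion):
        """H(w, lam) = (1 - lam) * easy(w) + lam * hard(w)."""
        easy = sum((p ** 2).sum() for p in model.parameters())  # convex surrogate (assumed)
        hard = criterion(model(x), y)                            # original non-convex loss
        return (1.0 - lam) * easy + lam * hard

    torch.manual_seed(0)
    model = nn.Sequential(nn.Linear(10, 32), nn.Tanh(), nn.Linear(32, 1))
    criterion = nn.MSELoss()
    x, y = torch.randn(256, 10), torch.randn(256, 1)
    opt = torch.optim.SGD(model.parameters(), lr=1e-2)

    prev = None  # parameters at the previous continuation step
    for lam in torch.linspace(0.1, 1.0, steps=10):
        # Secant predictor: extrapolate along the solution path, w_k + (w_k - w_{k-1}).
        curr = [p.detach().clone() for p in model.parameters()]
        if prev is not None:
            with torch.no_grad():
                for p, c, q in zip(model.parameters(), curr, prev):
                    p.copy_(c + (c - q))
        prev = curr

        # Corrector: (approximately) solve the subproblem at this lam by gradient descent.
        for _ in range(200):
            opt.zero_grad()
            loss = homotopy_loss(model, x, y, lam.item(), criterion)
            loss.backward()
            opt.step()
        print(f"lambda={lam.item():.2f}  subproblem loss={loss.item():.4f}")

In practice the corrector would be a standard mini-batch training loop, and the "easy" subproblem could instead be obtained by, for example, smoothing activations or shrinking the network, rather than the simple weight-norm surrogate assumed here.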


Bibliographic Details
Main Author: Pathak, Harsh Nilesh
Other Authors: Kyumin Lee, Reader
Format: Others
Published: Digital WPI, 2018
Subjects:
Online Access: https://digitalcommons.wpi.edu/etd-theses/1256
https://digitalcommons.wpi.edu/cgi/viewcontent.cgi?article=2262&context=etd-theses