An Effective Optimization Method for Machine Learning Based on ADAM

A machine is taught by finding the minimum value of the cost function induced by the learning data. Unfortunately, as the amount of learning increases, the non-linearity of the activation functions in the artificial neural network (ANN), the complexity of the artificial intelligence structure, and the non-convexity of the cost function all increase. We know that a non-convex function has local minima, and that the first derivative of the cost function is zero at a local minimum. Therefore, methods based on gradient-descent optimization stop changing once they fall into a local minimum, because they rely only on the first derivative of the cost function. This paper introduces a novel optimization method that makes machine learning more efficient; in other words, we construct an effective optimization method for a non-convex cost function. The proposed method avoids getting stuck in a local minimum by adding the cost function to the parameter update rule of the ADAM method. We prove the convergence of the sequences generated by the proposed method and demonstrate its superiority through numerical comparison with gradient descent (GD), ADAM, and AdaMax.


Bibliographic Details
Main Authors: Dokkyun Yi (Division of Creative Integrated General Studies, Daegu University College, Kyungsan 38453, Korea); Jaehyun Ahn (Department of Mathematics, College of Natural Sciences, Chungnam National University, Daejeon 34134, Korea); Sangmin Ji (Department of Mathematics, College of Natural Sciences, Chungnam National University, Daejeon 34134, Korea)
Format: Article
Language: English
Published: MDPI AG, 2020-02-01
Series: Applied Sciences
ISSN: 2076-3417
DOI: 10.3390/app10031073
Subjects: numerical optimization; ADAM; machine learning; stochastic gradient methods
Online Access: https://www.mdpi.com/2076-3417/10/3/1073
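
The abstract describes modifying ADAM's parameter update so that the step does not vanish at a local minimum, where the gradient is zero. The sketch below shows the standard ADAM update and, as a purely illustrative assumption, a variant in which the current cost value augments the step; the abstract does not give the paper's exact rule, so the function name `cost_augmented_adam_step`, the weight `lam`, and the specific form of the extra term are hypothetical.

```python
import numpy as np


def adam_step(theta, grad, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    """One step of the standard ADAM update (Kingma & Ba, 2015)."""
    m = b1 * m + (1 - b1) * grad              # biased first-moment estimate
    v = b2 * v + (1 - b2) * grad ** 2         # biased second-moment estimate
    m_hat = m / (1 - b1 ** t)                 # bias-corrected moments
    v_hat = v / (1 - b2 ** t)
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v


def cost_augmented_adam_step(theta, grad, cost, m, v, t,
                             lr=1e-3, b1=0.9, b2=0.999, eps=1e-8, lam=0.1):
    """Hypothetical variant sketching the abstract's idea: the current cost value
    is added to the update so the step does not shrink to zero at a local minimum
    where grad = 0. This is NOT the paper's exact rule, only an illustration."""
    m = b1 * m + (1 - b1) * grad
    v = b2 * v + (1 - b2) * grad ** 2
    m_hat = m / (1 - b1 ** t)
    v_hat = v / (1 - b2 ** t)
    step = m_hat / (np.sqrt(v_hat) + eps) + lam * cost   # extra cost-driven term
    return theta - lr * step, m, v
```

In this sketch the extra term vanishes as the cost approaches zero (near a global minimum of a non-negative cost) but otherwise keeps contributing to the update even after the gradient-driven part has died out; the hypothetical weight `lam` controls the strength of that contribution.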