A Novel Learning Rate Schedule in Optimization for Neural Networks and Its Convergence

The process of machine learning is to find the parameters that minimize a cost function constructed from the training data. This process is called optimization, and the resulting parameters are called the optimal parameters of the neural network. In the search for the optimum, there have been attempts to solve symmetric optimization problems or to initialize the parameters symmetrically. Furthermore, to obtain the optimal parameters, existing methods decrease the learning rate over the iterations or change it by a fixed ratio; either way, the learning rate decreases monotonically with the iteration count. Our idea is to let the learning rate vary rather than decrease monotonically. We introduce a method for finding the optimal parameters that adaptively changes the learning rate according to the value of the cost function, so that learning is complete, and the optimal parameters are obtained, once the cost function is minimized. This paper proves that the method converges to the optimal parameters, i.e., that it reaches a minimum of the cost function (effective learning). Numerical experiments demonstrate that learning is effective with the proposed learning rate schedule in a variety of situations.
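
This record does not reproduce the paper's actual schedule, so the following is only a minimal Python sketch of the general idea the abstract describes: a learning rate driven by the current value of the cost function rather than by the iteration count. The function adaptive_lr, its particular form base_lr * cost / (cost + scale), and the quadratic toy problem are illustrative assumptions, not the authors' formula.

    import numpy as np

    def adaptive_lr(base_lr, cost, scale=1.0):
        # Hypothetical cost-dependent schedule (an assumption, not the paper's rule):
        # the step size stays close to base_lr while the cost is large and
        # shrinks toward zero as the cost approaches its minimum.
        return base_lr * cost / (cost + scale)

    # Toy problem: gradient descent on the quadratic cost f(w) = ||w||^2 / 2,
    # whose gradient is simply w.
    w = np.array([3.0, -2.0])
    base_lr = 0.5
    for step in range(100):
        cost = 0.5 * float(np.dot(w, w))   # current value of the cost function
        lr = adaptive_lr(base_lr, cost)    # schedule reacts to the cost, not to the step count
        w = w - lr * w                     # gradient descent step
    print("final parameters:", w, "final cost:", 0.5 * float(np.dot(w, w)))

    # A conventional monotone schedule, by contrast, ignores the cost value,
    # e.g. lr = base_lr / (1.0 + 0.01 * step).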

Bibliographic Details
Main Authors: Jieun Park (Seongsan Liberal Arts College, Daegu University, Kyungsan 38453, Korea); Dokkyun Yi (Seongsan Liberal Arts College, Daegu University, Kyungsan 38453, Korea); Sangmin Ji (Department of Mathematics, College of Natural Sciences, Chungnam National University, Daejeon 34134, Korea)
Format: Article
Language: English
Published: MDPI AG, 2020-04-01
Series: Symmetry, Vol. 12, Issue 4, Article 660
ISSN: 2073-8994
DOI: 10.3390/sym12040660
Subjects: machine learning; numerical optimization; learning rate; convergence
Online Access: https://www.mdpi.com/2073-8994/12/4/660