Adaptive Natural Gradient Method for Learning of Stochastic Neural Networks in Mini-Batch Mode
The gradient descent method is an essential algorithm for training neural networks. Among the many variants of gradient descent developed to accelerate learning, natural gradient learning is based on the theory of information geometry on the stochastic neuromanifold and is known to have ideal convergence properties. Despite its theoretical advantages, the pure natural gradient has limitations that prevent its practical use: obtaining its explicit value requires knowing the true probability distribution of the input variables and inverting a matrix whose dimension equals the number of parameters. Although an adaptive estimation of the natural gradient has been proposed as a solution, it was originally developed for the online learning mode, which is computationally inefficient when learning from large data sets. In this paper, we propose a novel adaptive natural gradient estimation for the mini-batch learning mode, which is commonly adopted for big-data analysis. For two representative stochastic neural network models, we present explicit parameter-update rules and a learning algorithm. Through experiments on three benchmark problems, we confirm that the proposed method has convergence properties superior to those of the conventional methods.
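For context, the natural gradient update preconditions the ordinary gradient with the inverse of the Fisher information matrix of the stochastic network model, and the adaptive schemes referred to in the abstract estimate that inverse recursively instead of recomputing and inverting the full matrix at every step. The Python sketch below illustrates the general idea in mini-batch mode on a toy linear-Gaussian regression model; the model, hyperparameters, and the particular recursion used here are illustrative assumptions, not the paper's exact update rules.

```python
# Minimal sketch of mini-batch gradient descent preconditioned by an
# adaptively estimated inverse Fisher matrix. Illustrative only: the
# linear-Gaussian model, constant step sizes, and the recursion below
# are assumptions, not the update rules derived in the paper.
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data: y = w_true . x + Gaussian noise
n, d = 512, 5
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = X @ w_true + 0.1 * rng.normal(size=n)

w = np.zeros(d)          # parameters to learn
G_inv = np.eye(d)        # running estimate of the inverse Fisher matrix
eta, eps = 0.05, 0.01    # learning rate and inverse-Fisher update rate
batch = 32

for epoch in range(50):
    perm = rng.permutation(n)
    for start in range(0, n, batch):
        idx = perm[start:start + batch]
        Xb, yb = X[idx], y[idx]

        # Ordinary mini-batch gradient of the squared-error loss
        grad = Xb.T @ (Xb @ w - yb) / len(idx)

        # Adaptive estimate of the inverse Fisher matrix: a rank-one
        # recursion per sample, so no explicit matrix inversion is needed.
        for x_i in Xb:
            Gx = G_inv @ x_i
            G_inv = (1 + eps) * G_inv - eps * np.outer(Gx, Gx)

        # Natural gradient step: precondition the gradient with G_inv
        w -= eta * G_inv @ grad

print("parameter estimation error:", np.linalg.norm(w - w_true))
```

Maintaining the inverse estimate with a rank-one recursion costs on the order of the square of the number of parameters per sample, which avoids the explicit matrix inversion that the abstract identifies as a practical obstacle for the pure natural gradient.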
Main Authors: | Hyeyoung Park, Kwanyong Lee |
---|---|
Format: | Article |
Language: | English |
Published: | MDPI AG, 2019-10-01 |
Series: | Applied Sciences |
Subjects: | gradient descent learning algorithm; natural gradient; stochastic neural networks; online learning mode; mini-batch learning mode |
Online Access: | https://www.mdpi.com/2076-3417/9/21/4568 |
author | Hyeyoung Park, Kwanyong Lee
collection | DOAJ |
description | The gradient descent method is an essential algorithm for training neural networks. Among the many variants of gradient descent developed to accelerate learning, natural gradient learning is based on the theory of information geometry on the stochastic neuromanifold and is known to have ideal convergence properties. Despite its theoretical advantages, the pure natural gradient has limitations that prevent its practical use: obtaining its explicit value requires knowing the true probability distribution of the input variables and inverting a matrix whose dimension equals the number of parameters. Although an adaptive estimation of the natural gradient has been proposed as a solution, it was originally developed for the online learning mode, which is computationally inefficient when learning from large data sets. In this paper, we propose a novel adaptive natural gradient estimation for the mini-batch learning mode, which is commonly adopted for big-data analysis. For two representative stochastic neural network models, we present explicit parameter-update rules and a learning algorithm. Through experiments on three benchmark problems, we confirm that the proposed method has convergence properties superior to those of the conventional methods.
first_indexed | 2024-12-21T02:43:13Z |
format | Article |
id | doaj.art-8fee09772d2a492f84346228b5edc7be |
institution | Directory Open Access Journal |
issn | 2076-3417 |
language | English |
last_indexed | 2024-12-21T02:43:13Z |
publishDate | 2019-10-01 |
publisher | MDPI AG |
series | Applied Sciences |
spelling | doaj.art-8fee09772d2a492f84346228b5edc7be; Applied Sciences (ISSN 2076-3417), MDPI AG, 2019-10-01, vol. 9, no. 21, article 4568; doi:10.3390/app9214568; Hyeyoung Park (School of Computer Science and Engineering, Kyungpook National University, Daegu 41566, Korea); Kwanyong Lee (Department of Computer Science, Korea National Open University, Seoul 03087, Korea); https://www.mdpi.com/2076-3417/9/21/4568
title | Adaptive Natural Gradient Method for Learning of Stochastic Neural Networks in Mini-Batch Mode |
topic | gradient descent learning algorithm; natural gradient; stochastic neural networks; online learning mode; mini-batch learning mode
url | https://www.mdpi.com/2076-3417/9/21/4568 |