A Comparison of Regularization Techniques in Deep Neural Networks
Artificial neural networks (ANN) have attracted significant attention from researchers because many complex problems can be solved by training them. If enough data are provided during the training process, ANNs are capable of achieving good performance results. However, if training data are not enough, the predefined neural network model suffers from overfitting and underfitting problems. To solve these problems, several regularization techniques have been devised and widely applied to applications and data analysis. However, it is difficult for developers to choose the most suitable scheme for a developing application because there is no information regarding the performance of each scheme. This paper describes comparative research on regularization techniques by evaluating the training and validation errors in a deep neural network model, using a weather dataset. For comparisons, each algorithm was implemented using a recent neural network library of TensorFlow. The experiment results showed that an autoencoder had the worst performance among schemes. When the prediction accuracy was compared, data augmentation and the batch normalization scheme showed better performance than the others.
Main Authors: | Ismoilov Nusrat, Sung-Bong Jang |
---|---|
Format: | Article |
Language: | English |
Published: | MDPI AG, 2018-11-01 |
Series: | Symmetry |
Subjects: | deep neural networks; regularization methods; temperature prediction; tensor flow library |
Online Access: | https://www.mdpi.com/2073-8994/10/11/648 |
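The abstract reports that batch normalization was among the best-performing regularization schemes in the comparison. As an illustration of what that scheme computes, the following is a minimal NumPy sketch of the batch-normalization forward pass (training-mode statistics only); it is not taken from the paper, whose experiments used TensorFlow implementations that are not reproduced here.

```python
import numpy as np

def batch_norm(x, gamma, beta, eps=1e-5):
    # Normalize each feature over the batch dimension (axis 0),
    # then apply the learnable scale (gamma) and shift (beta).
    mean = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mean) / np.sqrt(var + eps)
    return gamma * x_hat + beta

# One mini-batch of 32 samples with 4 features, deliberately off-center.
rng = np.random.default_rng(0)
x = rng.normal(loc=5.0, scale=3.0, size=(32, 4))
y = batch_norm(x, gamma=np.ones(4), beta=np.zeros(4))
# After normalization, each feature has approximately zero mean and unit variance.
```

In practice a layer like this also keeps running averages of `mean` and `var` for use at inference time; that bookkeeping is omitted here for brevity.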
author | Ismoilov Nusrat Sung-Bong Jang |
collection | DOAJ |
description | Artificial neural networks (ANN) have attracted significant attention from researchers because many complex problems can be solved by training them. If enough data are provided during the training process, ANNs are capable of achieving good performance results. However, if training data are not enough, the predefined neural network model suffers from overfitting and underfitting problems. To solve these problems, several regularization techniques have been devised and widely applied to applications and data analysis. However, it is difficult for developers to choose the most suitable scheme for a developing application because there is no information regarding the performance of each scheme. This paper describes comparative research on regularization techniques by evaluating the training and validation errors in a deep neural network model, using a weather dataset. For comparisons, each algorithm was implemented using a recent neural network library of TensorFlow. The experiment results showed that an autoencoder had the worst performance among schemes. When the prediction accuracy was compared, data augmentation and the batch normalization scheme showed better performance than the others. |
format | Article |
id | doaj.art-5b5e228930294b7e9b03222d78179269 |
institution | Directory Open Access Journal |
issn | 2073-8994 |
language | English |
publishDate | 2018-11-01 |
publisher | MDPI AG |
series | Symmetry |
doi | 10.3390/sym10110648 |
citation | Symmetry, vol. 10, iss. 11, art. 648 (2018-11-01) |
author_affiliations | Ismoilov Nusrat: Department of Computer Software Engineering, Kumoh National Institute of Technology, Gyeong-Buk 39177, South Korea; Sung-Bong Jang: Department of Industry-Academy, Kumoh National Institute of Technology, Gyeong-Buk 39177, South Korea |
title | A Comparison of Regularization Techniques in Deep Neural Networks |
topic | deep neural networks regularization methods temperature prediction tensor flow library |
url | https://www.mdpi.com/2073-8994/10/11/648 |