Training Multilayer Neural Network Based on Optimal Control Theory for Limited Computational Resources
Backpropagation (BP)-based gradient descent is the standard approach to training a multilayer perceptron neural network. However, BP is inherently slow to learn and sometimes becomes trapped in local minima, mainly because of its constant learning rate. This pre-fixed learning rate regularly leads the BP network towards an unsuccessful stochastic steepest descent. To overcome this limitation of BP, this work presents an improved method of training the neural network based on optimal control (OC) theory. The state equations in optimal control represent the BP neural network’s weights and biases, while the learning rate is treated as the input control, which adapts during the training process. The effectiveness of the proposed algorithm is evaluated on several logic gate models, such as XOR, AND, and OR, as well as a full adder model. Simulation results demonstrate that the proposed algorithm outperforms the conventional method, achieving higher output accuracy with shorter training time. Training via OC also reduces the risk of trapping in local minima. The proposed algorithm is almost 40% faster than the steepest descent method, with a marginally improved accuracy of approximately 60%. Consequently, the proposed algorithm is suitable for devices with limited computational resources, since it is less complex and thus lowers the circuit’s power consumption.
Main Authors: | Ali Najem Alkawaz; Jeevan Kanesan; Anis Salwa Mohd Khairuddin; Irfan Anjum Badruddin; Sarfaraz Kamangar; Mohamed Hussien; Maughal Ahmed Ali Baig; N. Ameer Ahammad |
---|---|
Format: | Article |
Language: | English |
Published: | MDPI AG, 2023-02-01 |
Series: | Mathematics |
Subjects: | multilayer neural network; optimal control; Pontryagin minimum principle; backpropagation; logic gates |
Online Access: | https://www.mdpi.com/2227-7390/11/3/778 |
author | Ali Najem Alkawaz; Jeevan Kanesan; Anis Salwa Mohd Khairuddin; Irfan Anjum Badruddin; Sarfaraz Kamangar; Mohamed Hussien; Maughal Ahmed Ali Baig; N. Ameer Ahammad |
collection | DOAJ |
description | Backpropagation (BP)-based gradient descent is the standard approach to training a multilayer perceptron neural network. However, BP is inherently slow to learn and sometimes becomes trapped in local minima, mainly because of its constant learning rate. This pre-fixed learning rate regularly leads the BP network towards an unsuccessful stochastic steepest descent. To overcome this limitation of BP, this work presents an improved method of training the neural network based on optimal control (OC) theory. The state equations in optimal control represent the BP neural network’s weights and biases, while the learning rate is treated as the input control, which adapts during the training process. The effectiveness of the proposed algorithm is evaluated on several logic gate models, such as XOR, AND, and OR, as well as a full adder model. Simulation results demonstrate that the proposed algorithm outperforms the conventional method, achieving higher output accuracy with shorter training time. Training via OC also reduces the risk of trapping in local minima. The proposed algorithm is almost 40% faster than the steepest descent method, with a marginally improved accuracy of approximately 60%. Consequently, the proposed algorithm is suitable for devices with limited computational resources, since it is less complex and thus lowers the circuit’s power consumption. |
first_indexed | 2024-03-11T09:34:02Z |
format | Article |
id | doaj.art-1bebbf50955848cfbd8fe90a13eeaef3 |
institution | Directory Open Access Journal |
issn | 2227-7390 |
language | English |
last_indexed | 2024-03-11T09:34:02Z |
publishDate | 2023-02-01 |
publisher | MDPI AG |
record_format | Article |
series | Mathematics |
spelling | doaj.art-1bebbf50955848cfbd8fe90a13eeaef3 |
citation | Mathematics, vol. 11, no. 3, art. 778, 2023-02-01; doi:10.3390/math11030778 |
affiliations | Ali Najem Alkawaz, Jeevan Kanesan, Anis Salwa Mohd Khairuddin: Department of Electrical Engineering, Faculty of Engineering, Universiti Malaya, Kuala Lumpur 50603, Malaysia. Irfan Anjum Badruddin, Sarfaraz Kamangar: Mechanical Engineering Department, College of Engineering, King Khalid University, Abha 61421, Saudi Arabia. Mohamed Hussien: Department of Chemistry, Faculty of Science, King Khalid University, P.O. Box 9004, Abha 61413, Saudi Arabia. Maughal Ahmed Ali Baig: Department of Mechanical Engineering, CMR Technical Campus, Kandlakoya, Medchal Road, Hyderabad 501401, India. N. Ameer Ahammad: Department of Mathematics, Faculty of Science, University of Tabuk, Tabuk 71491, Saudi Arabia |
title | Training Multilayer Neural Network Based on Optimal Control Theory for Limited Computational Resources |
topic | multilayer neural network; optimal control; Pontryagin minimum principle; backpropagation; logic gates |
url | https://www.mdpi.com/2227-7390/11/3/778 |
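The abstract describes training an MLP in which the learning rate is not a fixed constant but a control input that adapts at each step of training. The paper's actual formulation derives that control via Pontryagin's minimum principle; the standalone sketch below is only a hypothetical illustration of the underlying intuition, substituting a crude greedy per-step search over a few candidate step sizes for the optimal-control solution. The network size (2-2-1), the candidate step sizes, and all function names here are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical illustration only: the paper computes the adaptive learning
# rate from Pontryagin's minimum principle; here a greedy per-step search
# over candidate step sizes stands in for that "control" input.
import math
import random

random.seed(0)

# XOR training set: ((x1, x2), target)
XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

def sigmoid(x):
    x = max(-60.0, min(60.0, x))  # clamp to avoid math.exp overflow
    return 1.0 / (1.0 + math.exp(-x))

def forward(w, x):
    """2-2-1 MLP: w[0:6] are hidden weights/biases, w[6:9] the output layer."""
    h1 = sigmoid(w[0] * x[0] + w[1] * x[1] + w[2])
    h2 = sigmoid(w[3] * x[0] + w[4] * x[1] + w[5])
    y = sigmoid(w[6] * h1 + w[7] * h2 + w[8])
    return h1, h2, y

def loss(w):
    """Mean squared error over the four XOR patterns."""
    return sum((forward(w, x)[2] - t) ** 2 for x, t in XOR) / len(XOR)

def grad(w):
    """Gradient of the MSE via standard backpropagation."""
    g = [0.0] * 9
    for x, t in XOR:
        h1, h2, y = forward(w, x)
        dy = 2 * (y - t) * y * (1 - y) / len(XOR)
        g[6] += dy * h1; g[7] += dy * h2; g[8] += dy
        d1 = dy * w[6] * h1 * (1 - h1)
        d2 = dy * w[7] * h2 * (1 - h2)
        g[0] += d1 * x[0]; g[1] += d1 * x[1]; g[2] += d1
        g[3] += d2 * x[0]; g[4] += d2 * x[1]; g[5] += d2
    return g

def train(w, steps=2000):
    w = list(w)
    for _ in range(steps):
        g = grad(w)
        # Crude stand-in for the control input: pick the candidate step
        # size that most reduces the loss at this point of the trajectory.
        eta = min((0.001, 0.1, 0.5, 2.0, 8.0),
                  key=lambda a: loss([wi - a * gi for wi, gi in zip(w, g)]))
        w = [wi - eta * gi for wi, gi in zip(w, g)]
    return w

w0 = [random.uniform(-1, 1) for _ in range(9)]
w = train(w0)
print(f"loss: {loss(w0):.4f} -> {loss(w):.4f}")
```

A fixed-rate baseline is obtained by replacing the `eta` search with a constant; in this toy setting the adaptive variant typically reaches a lower loss in fewer steps, loosely mirroring the kind of speed-up the abstract reports for the optimal-control formulation.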