PID controller‐based adaptive gradient optimizer for deep neural networks

Bibliographic Details
Main Authors: Mingjun Dai, Zelong Zhang, Xiong Lai, Xiaohui Lin, Hui Wang
Format: Article
Language: English
Published: Wiley 2023-10-01
Series: IET Control Theory & Applications
Online Access: https://doi.org/10.1049/cth2.12404
Description
Summary: Due to improper selection of the gradient update direction or the learning rate, SGD optimization algorithms for deep learning suffer from oscillation and slow convergence. Although the Adam algorithm can adaptively adjust the update direction and the learning rate at the same time, it still exhibits the overshoot phenomenon, and hence wastes computing resources and converges slowly. In this work, the PID controller from the feedback control field is borrowed to re-express the adaptive optimization algorithms of deep learning: the Adam optimization algorithm is derived as the integral (I) component. To alleviate the overshoot phenomenon and thereby speed up the convergence of Adam, a complete adaptive PID optimizer (adaptive-PID) is proposed by incorporating the proportional (P) and derivative (D) components. Extensive experiments on standard data sets verify that the proposed adaptive-PID algorithm significantly outperforms the Adam algorithm in terms of convergence rate and accuracy.
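
To make the P/I/D decomposition concrete, the following is a minimal Python sketch of a PID-style gradient update under the assumptions outlined in the abstract: Adam's bias-corrected moment estimates serve as the integral (I) term, the raw gradient as the proportional (P) term, and the difference of successive gradients as the derivative (D) term. The function name adaptive_pid_step and the gains kp and kd are illustrative placeholders, not the paper's notation or exact algorithm.

import math

def adaptive_pid_step(theta, grad, state, lr=0.1, kp=0.1, kd=0.1,
                      beta1=0.9, beta2=0.999, eps=1e-8):
    # Integral (I) part: Adam's bias-corrected first/second moment estimates.
    state["t"] += 1
    t = state["t"]
    state["m"] = beta1 * state["m"] + (1 - beta1) * grad
    state["v"] = beta2 * state["v"] + (1 - beta2) * grad * grad
    m_hat = state["m"] / (1 - beta1 ** t)
    v_hat = state["v"] / (1 - beta2 ** t)
    i_term = m_hat / (math.sqrt(v_hat) + eps)
    # Proportional (P) part: the current raw gradient.
    p_term = grad
    # Derivative (D) part: finite difference of successive gradients,
    # intended to damp the overshoot of a pure I (Adam-like) update.
    d_term = grad - state["prev_grad"]
    state["prev_grad"] = grad
    # Weighted sum of the three components drives the parameter update.
    return theta - lr * (kp * p_term + i_term + kd * d_term)

# Toy usage: minimize f(x) = x^2 (gradient 2x) starting from x = 5.
state = {"m": 0.0, "v": 0.0, "prev_grad": 0.0, "t": 0}
x = 5.0
for _ in range(300):
    x = adaptive_pid_step(x, 2.0 * x, state)
print(x)  # x ends up oscillating close to the minimum at 0

With kp = kd = 0 the update reduces to a plain Adam step, which is consistent with the abstract's view of Adam as the I component alone; how the paper actually sets or adapts the P and D gains is not specified here.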
ISSN: 1751-8644, 1751-8652