Deep Q‐learning recommender algorithm with update policy for a real steam turbine system


Bibliographic Details
Main Authors: Mohammad Hossein Modirrousta, Mahdi Aliyari Shoorehdeli, Mostafa Yari, Arash Ghahremani
Format: Article
Language: English
Published: Wiley 2023-09-01
Series: IET Collaborative Intelligent Manufacturing
Subjects:
Online Access: https://doi.org/10.1049/cim2.12081
Description
Summary: Abstract In modern industrial systems, timely and effective fault diagnosis is increasingly crucial. If faults go undetected, or are detected late, the system may fail or resources may be wasted. Machine learning and deep learning (DL) offer various methods for data‐based fault diagnosis, and the authors seek the most reliable and practical ones. A framework based on DL and reinforcement learning (RL) is developed for fault detection. The authors utilise two algorithms in their work: Q‐Learning and Soft Q‐Learning. Reinforcement learning frameworks frequently include efficient algorithms for policy updates, such as Q‐learning; these algorithms optimise the policy based on predictions and rewards, resulting in more efficient updates and quicker convergence. By updating the RL policy as new data are received, the authors increase accuracy, mitigate data imbalance, and better predict future defects. Applying their method yields an increase of 3%–4% in all evaluation metrics from policy updating, an improvement in prediction speed, and an increase of 3%–6% in all evaluation metrics compared to a typical backpropagation multi‐layer neural network with comparable parameters. In addition, the Soft Q‐learning algorithm yields better outcomes than Q‐learning.
ISSN:2516-8398