Development of a deep learning-based approach for remaining useful life prediction of turbofan engine

Bibliographic Details
Main Author: Ng, Yi Hong
Other Authors: Chen, Chun-Hsien
Format: Final Year Project (FYP)
Language: English
Published: Nanyang Technological University 2023
Subjects:
Online Access:https://hdl.handle.net/10356/166934
Description
Summary: In recent years, there has been growing interest in predictive maintenance for turbofan engines, as it has demonstrated significant potential to improve the maintenance process. The core concept behind predictive maintenance is to predict the remaining useful life (RUL) of a system. With accurate RUL predictions, the maintenance process can be optimised in terms of maintenance frequency and resources consumed, leading to considerable cost savings without compromising the reliability and safety of the system. Data-driven methods, especially deep learning (DL) models, contribute significantly to the popularity of predictive maintenance, as they have achieved unprecedented prediction accuracy. In particular, Graph Neural Networks (GNNs) have recently gained rapid interest due to their impressive predictive performance in complex applications. Hence, this study aims to develop a novel state-of-the-art DL framework that uses a GNN-based algorithm to predict the RUL of turbofan engines. The proposed framework includes the data processing techniques needed to convert raw multivariate time series data into compatible inputs for the Long Short-Term Memory (LSTM)-based and GNN-based components. Survival analysis models are also used to extract useful features during data processing. These processed inputs are fed into a novel DL model that combines LSTM, GNN, an attention mechanism, and CatBoost. The model adopts a two-step training strategy: the DL components are first trained end-to-end using an asymmetric loss function (LINEX), after which the CatBoost model replaces the last two fully connected layers and is trained on the features extracted by the DL components. Repeated experiments show that the proposed framework outperforms many state-of-the-art DL models, and an ablation study confirms that its key components, including the GNN-based component, contribute significantly to its performance. Thus, the proposed model serves as a strong benchmark for future state-of-the-art graph-based DL models.
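
As an illustration of the two-step strategy and the asymmetric loss described above, the following minimal Python sketch pairs a placeholder PyTorch backbone with the standard CatBoost regressor. The Backbone class, the LINEX coefficients a and b, the window size, and the layer widths are illustrative assumptions; the thesis's actual LSTM/GNN/attention architecture, graph construction, and hyperparameters are not reproduced here.

# Sketch of the two-step training idea described in the abstract, assuming a
# PyTorch backbone and the standard CatBoost API. The LINEX form
# b*(exp(a*e) - a*e - 1) and the sign convention for the error e are assumptions.
import torch
import torch.nn as nn
from catboost import CatBoostRegressor

def linex_loss(y_pred, y_true, a=1.0, b=1.0):
    """Asymmetric LINEX loss: with a > 0, over-estimating RUL is penalised
    more heavily than under-estimating it (a, b are hypothetical defaults)."""
    e = y_pred - y_true
    return (b * (torch.exp(a * e) - a * e - 1)).mean()

class Backbone(nn.Module):
    """Stand-in for the LSTM/GNN/attention feature extractor; the real
    architecture is not detailed in the abstract."""
    def __init__(self, n_features, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        # Two fully connected layers, analogous to the head later replaced by CatBoost.
        self.head = nn.Sequential(nn.Linear(hidden, 32), nn.ReLU(), nn.Linear(32, 1))

    def features(self, x):
        out, _ = self.lstm(x)          # (batch, time, hidden)
        return out[:, -1, :]           # last time step as the extracted features

    def forward(self, x):
        return self.head(self.features(x)).squeeze(-1)

# Step 1: train the DL components end-to-end with the LINEX loss.
model = Backbone(n_features=14)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.randn(32, 30, 14)            # toy batch: 32 windows, 30 time steps, 14 sensors
y = torch.rand(32) * 125               # toy RUL targets
for _ in range(5):
    opt.zero_grad()
    loss = linex_loss(model(x), y)
    loss.backward()
    opt.step()

# Step 2: set the fully connected head aside and train CatBoost on the
# features extracted by the frozen DL components.
with torch.no_grad():
    feats = model.features(x).numpy()
cat = CatBoostRegressor(iterations=200, verbose=False)
cat.fit(feats, y.numpy())
rul_pred = cat.predict(feats)

The second step mirrors the abstract's description: the fully connected head used during end-to-end training is discarded, and the gradient-boosted CatBoost model produces the final RUL predictions from the learned deep features.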