An Explainable Artificial Intelligence Approach for Remaining Useful Life Prediction
Prognosis and health management depend on sufficient prior knowledge of the degradation process of critical components to predict the remaining useful life. This task is composed of two phases: learning and prediction. The first phase uses the available information to learn the system’s behavior. The second phase predicts future behavior from the available system information and estimates the remaining lifetime. Deep learning approaches achieve good prognostic performance but usually suffer from a high computational load and a lack of interpretability. Complex feature extraction models do not solve this problem, as they lose information in the learning phase and therefore give a poor prognosis of the remaining lifetime. A new preprocessing approach based on feature clustering is used to address this issue. It restructures the data into homogeneous, strongly related groups that can be learned with a simple LSTM architecture, which reduces training time and allows the model to run with limited computational resources. We then focus on the interpretability of the deep learning prognosis, using Explainable AI to achieve interpretable RUL prediction. The proposed approach offers model improvement and enhanced interpretability, enabling a better understanding of feature contributions. Experimental results on the publicly available NASA C-MAPSS dataset show the performance of the proposed model compared to other common methods.
Main Authors: Genane Youness, Adam Aalah
Format: Article
Language: English
Published: MDPI AG, 2023-05-01
Series: Aerospace
Subjects: prognostic and health management; remaining useful life; feature clustering; Explainable Artificial Intelligence (XAI)
Online Access: https://www.mdpi.com/2226-4310/10/5/474
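The abstract describes a three-step pipeline: group correlated sensor features into homogeneous clusters, train a compact LSTM on the restructured data, and apply an Explainable AI method to attribute the predicted RUL to individual features. The article's own implementation is not reproduced in this record; the sketch below only illustrates that general pipeline under assumed choices (hierarchical clustering of a feature-correlation matrix, a small Keras LSTM, synthetic stand-in data instead of C-MAPSS, and placeholder window sizes and hyperparameters).

```python
# Minimal sketch of the pipeline named in the abstract, NOT the authors'
# implementation: feature clustering + a compact LSTM regressor for RUL.
# All shapes, hyperparameters, and the clustering choice are assumptions.
import numpy as np
from sklearn.cluster import AgglomerativeClustering
from tensorflow import keras

def cluster_features(X_flat, n_groups=4):
    """Group correlated sensor channels into homogeneous clusters.

    X_flat: (samples, features) array of raw readings.
    Returns one array of column indices per cluster.
    """
    corr = np.corrcoef(X_flat, rowvar=False)   # feature-feature correlation
    dist = 1.0 - np.abs(corr)                  # similar features -> small distance
    # Cluster features using their correlation-distance profiles as descriptors.
    labels = AgglomerativeClustering(n_clusters=n_groups).fit_predict(dist)
    return [np.where(labels == g)[0] for g in range(n_groups)]

def build_lstm(n_timesteps, n_features):
    """A deliberately small LSTM regressor for RUL (placeholder sizes)."""
    model = keras.Sequential([
        keras.layers.Input(shape=(n_timesteps, n_features)),
        keras.layers.LSTM(32),
        keras.layers.Dense(16, activation="relu"),
        keras.layers.Dense(1),                 # predicted RUL in cycles
    ])
    model.compile(optimizer="adam", loss="mse")
    return model

# Toy usage with synthetic windows standing in for C-MAPSS sensor data.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 30, 14))             # (windows, timesteps, sensors)
y = rng.uniform(0, 125, size=200)              # RUL targets in cycles

groups = cluster_features(X.reshape(-1, X.shape[-1]))
# One possible reading of "restructuring the data into homogeneous groups":
# train a compact model per feature cluster.
for idx in groups:
    model = build_lstm(X.shape[1], len(idx))
    model.fit(X[:, :, idx], y, epochs=2, batch_size=32, verbose=0)
# An attribution method (e.g., SHAP) could then explain each prediction in
# terms of the sensors in its cluster; omitted here to keep the sketch short.
```

Any specific clustering algorithm, window length, or explainer used in the study would of course have to be taken from the article itself.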
_version_ | 1797601604462968832 |
author | Genane Youness, Adam Aalah
author_facet | Genane Youness, Adam Aalah
author_sort | Genane Youness |
collection | DOAJ |
description | Prognosis and health management depend on sufficient prior knowledge of the degradation process of critical components to predict the remaining useful life. This task is composed of two phases: learning and prediction. The first phase uses the available information to learn the system’s behavior. The second phase predicts future behavior from the available system information and estimates the remaining lifetime. Deep learning approaches achieve good prognostic performance but usually suffer from a high computational load and a lack of interpretability. Complex feature extraction models do not solve this problem, as they lose information in the learning phase and therefore give a poor prognosis of the remaining lifetime. A new preprocessing approach based on feature clustering is used to address this issue. It restructures the data into homogeneous, strongly related groups that can be learned with a simple LSTM architecture, which reduces training time and allows the model to run with limited computational resources. We then focus on the interpretability of the deep learning prognosis, using Explainable AI to achieve interpretable RUL prediction. The proposed approach offers model improvement and enhanced interpretability, enabling a better understanding of feature contributions. Experimental results on the publicly available NASA C-MAPSS dataset show the performance of the proposed model compared to other common methods. |
first_indexed | 2024-03-11T04:02:42Z |
format | Article |
id | doaj.art-6fb6c0b72f0e42ccb6ba46349943c98c |
institution | Directory Open Access Journal |
issn | 2226-4310 |
language | English |
last_indexed | 2024-03-11T04:02:42Z |
publishDate | 2023-05-01 |
publisher | MDPI AG |
record_format | Article |
series | Aerospace |
spelling | doaj.art-6fb6c0b72f0e42ccb6ba46349943c98c2023-11-18T00:00:50ZengMDPI AGAerospace2226-43102023-05-0110547410.3390/aerospace10050474An Explainable Artificial Intelligence Approach for Remaining Useful Life PredictionGenane Youness0Adam Aalah1Laboratoire LINEACT CESI, IDFC, 92000 Nanterre, FranceInstitut Polytechnique de Paris, 91120 Palaiseau, FrancePrognosis and health management depend on sufficient prior knowledge of the degradation process of critical components to predict the remaining useful life. This task is composed of two phases: learning and prediction. The first phase uses the available information to learn the system’s behavior. The second phase predicts future behavior from the available system information and estimates the remaining lifetime. Deep learning approaches achieve good prognostic performance but usually suffer from a high computational load and a lack of interpretability. Complex feature extraction models do not solve this problem, as they lose information in the learning phase and therefore give a poor prognosis of the remaining lifetime. A new preprocessing approach based on feature clustering is used to address this issue. It restructures the data into homogeneous, strongly related groups that can be learned with a simple LSTM architecture, which reduces training time and allows the model to run with limited computational resources. We then focus on the interpretability of the deep learning prognosis, using Explainable AI to achieve interpretable RUL prediction. The proposed approach offers model improvement and enhanced interpretability, enabling a better understanding of feature contributions. Experimental results on the publicly available NASA C-MAPSS dataset show the performance of the proposed model compared to other common methods.https://www.mdpi.com/2226-4310/10/5/474prognostic and health managementremaining useful lifefeature clusteringExplainable Artificial Intelligence (XAI) |
spellingShingle | Genane Youness; Adam Aalah; An Explainable Artificial Intelligence Approach for Remaining Useful Life Prediction; Aerospace; prognostic and health management; remaining useful life; feature clustering; Explainable Artificial Intelligence (XAI) |
title | An Explainable Artificial Intelligence Approach for Remaining Useful Life Prediction |
title_full | An Explainable Artificial Intelligence Approach for Remaining Useful Life Prediction |
title_fullStr | An Explainable Artificial Intelligence Approach for Remaining Useful Life Prediction |
title_full_unstemmed | An Explainable Artificial Intelligence Approach for Remaining Useful Life Prediction |
title_short | An Explainable Artificial Intelligence Approach for Remaining Useful Life Prediction |
title_sort | explainable artificial intelligence approach for remaining useful life prediction |
topic | prognostic and health management; remaining useful life; feature clustering; Explainable Artificial Intelligence (XAI) |
url | https://www.mdpi.com/2226-4310/10/5/474 |
work_keys_str_mv | AT genaneyouness anexplainableartificialintelligenceapproachforremainingusefullifeprediction AT adamaalah anexplainableartificialintelligenceapproachforremainingusefullifeprediction AT genaneyouness explainableartificialintelligenceapproachforremainingusefullifeprediction AT adamaalah explainableartificialintelligenceapproachforremainingusefullifeprediction |