Human-Centered Efficient Explanation on Intrusion Detection Prediction
Methodologies for constructing intrusion detection systems and improving existing ones are being actively studied in order to detect harmful data within large volumes of network traffic. The most common approach is to use AI systems to adapt to unanticipated threats and improve detection performance. However, most studies aim to improve performance, and performance-oriented systems tend to be built on black-box models whose internal workings are complex...
Main Authors: | Yongsoo Lee, Eungyu Lee, Taejin Lee |
---|---|
Format: | Article |
Language: | English |
Published: | MDPI AG, 2022-07-01 |
Series: | Electronics |
Subjects: | AI-based security for networks; intrusion detection system; rule-based learning; interpretability; explainability; suspicious data |
Online Access: | https://www.mdpi.com/2079-9292/11/13/2082 |
_version_ | 1797480347153203200 |
---|---|
author | Yongsoo Lee; Eungyu Lee; Taejin Lee |
author_sort | Yongsoo Lee |
collection | DOAJ |
description | Methodologies for constructing intrusion detection systems and improving existing ones are being actively studied in order to detect harmful data within large volumes of network traffic. The most common approach is to use AI systems to adapt to unanticipated threats and improve detection performance. However, most studies aim to improve performance, and performance-oriented systems tend to be built on black-box models whose internal workings are complex. In the field of security control, analysts strive to interpret and respond based on the given data, the system's prediction results, and their own knowledge. Consequently, performance-oriented systems suffer from a lack of interpretability, because they expose little information about their prediction results and internal processes. The recent social climate also demands a responsible system rather than a purely performance-focused one. This research aims to ensure understanding and interpretation by providing interpretability for AI systems in multi-class classification environments that can detect various attacks. In particular, the better the performance, the more complex and less transparent the model becomes, the smaller the portion of its behavior an analyst can understand, and the lower the processing efficiency as a result. The approach presented in this research is an intrusion detection methodology that uses FOS, based on SHAP values, to evaluate whether a prediction result is suspicious and selects the optimal rule from a transparent model to improve the explanation. |
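The description mentions scoring predictions with a SHAP-based FOS and falling back to rules from a transparent model, but gives no formulas. Below is a minimal, hypothetical sketch of that idea in Python: the synthetic dataset, the `feature_outlier_score` name, and the z-score aggregation are assumptions for illustration, not the paper's definitions; only the general SHAP-then-threshold flow follows the abstract.

```python
# Hypothetical sketch: SHAP-based suspiciousness scoring for a multi-class
# intrusion-detection classifier. The FOS formula below (mean absolute
# z-score of a sample's SHAP attributions relative to its predicted class)
# is an assumption for illustration, not the paper's exact definition.
import numpy as np
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Stand-in data for a multi-class IDS setting (hypothetical).
X, y = make_classification(n_samples=2000, n_features=20, n_informative=10,
                           n_classes=4, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)
raw = explainer.shap_values(X)
# Normalize to (n_classes, n_samples, n_features) across shap versions,
# which return either a list of per-class arrays or a single 3-D array.
sv = np.array(raw) if isinstance(raw, list) else np.moveaxis(raw, -1, 0)

pred = model.predict(X)
# SHAP attribution vector of each sample for its own predicted class.
own = np.stack([sv[c, i, :] for i, c in enumerate(pred)])


def feature_outlier_score(attr, pred, n_classes):
    """Assumed FOS: how atypical a sample's attributions are for its class."""
    scores = np.zeros(len(pred))
    for c in range(n_classes):
        idx = pred == c
        if not idx.any():
            continue
        mu, sigma = attr[idx].mean(axis=0), attr[idx].std(axis=0) + 1e-9
        scores[idx] = np.abs((attr[idx] - mu) / sigma).mean(axis=1)
    return scores


fos = feature_outlier_score(own, pred, n_classes=4)
# Route the most suspicious predictions to a transparent rule-based model
# for an analyst-readable explanation (the 5% threshold is illustrative).
suspicious = fos > np.quantile(fos, 0.95)
print(f"flagged {suspicious.sum()} of {len(X)} predictions as suspicious")
```

In such a setup, the high-FOS samples would then be matched against rules from a transparent model (e.g., a decision tree or rule learner), and the selected rule would serve as the human-readable explanation, mirroring the "optimal rule from the transparent model" idea in the abstract.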
first_indexed | 2024-03-09T21:59:23Z |
format | Article |
id | doaj.art-2d5781dfa41e48989b663ec049df81ef |
institution | Directory Open Access Journal |
issn | 2079-9292 |
language | English |
last_indexed | 2024-03-09T21:59:23Z |
publishDate | 2022-07-01 |
publisher | MDPI AG |
record_format | Article |
series | Electronics |
spelling | doaj.art-2d5781dfa41e48989b663ec049df81ef | Yongsoo Lee; Eungyu Lee; Taejin Lee (Department of Information Security, Hoseo University, Asan 31499, Korea). Human-Centered Efficient Explanation on Intrusion Detection Prediction. Electronics (MDPI AG), ISSN 2079-9292, 2022-07-01, vol. 11, no. 13, art. 2082, doi:10.3390/electronics11132082. https://www.mdpi.com/2079-9292/11/13/2082 |
title | Human-Centered Efficient Explanation on Intrusion Detection Prediction |
topic | AI-based security for networks; intrusion detection system; rule-based learning; interpretability; explainability; suspicious data |
url | https://www.mdpi.com/2079-9292/11/13/2082 |