Software testing and explainable: A study for evaluating XAI methods on software testing datasets
Explainable AI (XAI) is defined as a set of tools and frameworks used to help humans understand machine learning models, which can often be opaque. Two XAI techniques, SHapley Additive exPlanations (SHAP) and Local Interpretable Model-Agnostic Explanations (LIME), are state-of-the-art XAI tools that are model-agnostic and can be used to explain any machine learning model. This project aims to compare the performance of SHAP and LIME in four aspects: local interpretability of the test set, global interpretability of the test set, local interpretability of misclassified observations, and global interpretability of misclassified versus correctly classified observations. This project focuses on training Decision Tree Classifier models for Software Defect Prediction using publicly available datasets, using SHAP and LIME to explain the models' predictions, and comparing SHAP and LIME across the four aspects mentioned.
Main Author: | Tay, Glenn |
Other Authors: | Fan Xiuyi |
Format: | Final Year Project (FYP) |
Language: | English |
Published: | Nanyang Technological University, 2023 |
Subjects: | Engineering::Computer science and engineering::Computing methodologies::Artificial intelligence; Engineering::Computer science and engineering::Software::Software engineering |
Online Access: | https://hdl.handle.net/10356/166087 |
author | Tay, Glenn |
author2 | Fan Xiuyi |
collection | NTU |
description | Explainable AI (XAI) is defined as a set of tools and frameworks used to help humans
understand machine learning models, which can often be opaque. Two XAI techniques, SHapley Additive
exPlanations (SHAP) and Local Interpretable Model-Agnostic Explanations (LIME), are state-of-the-art
XAI tools that are model-agnostic and can be used to explain any machine learning model. This
project aims to compare the performance of SHAP and LIME in four aspects: local interpretability of
the test set, global interpretability of the test set, local interpretability of misclassified
observations, and global interpretability of misclassified versus correctly classified observations.
This project focuses on training Decision Tree Classifier models for Software Defect Prediction
using publicly available datasets, using SHAP and LIME to explain the models' predictions, and
comparing SHAP and LIME across the four aspects mentioned. |
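The workflow the abstract describes can be sketched in a minimal, self-contained form: train a decision-tree classifier on a synthetic stand-in for a defect dataset, then compute exact Shapley values for one prediction by brute force over feature coalitions (the quantity that the SHAP library approximates efficiently). The feature names and data here are illustrative assumptions, not the project's actual datasets.

```python
# Sketch: decision-tree defect classifier + exact Shapley-value attribution
# for a single prediction, computed by brute force (feasible for few features).
from itertools import combinations
from math import factorial

import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

# Synthetic stand-in for a software-defect dataset; the feature names are
# loosely modeled on static code metrics and are purely illustrative.
feature_names = ["loc", "cyclomatic", "num_commits", "churn"]
X = rng.normal(size=(200, 4))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # "defect-prone" label

model = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

def shapley_values(predict, x, background):
    """Exact interventional Shapley values for one instance x."""
    n = x.shape[0]

    def value(subset):
        # Expected prediction with the features in `subset` fixed to x's
        # values and the remaining features drawn from the background data.
        data = background.copy()
        data[:, list(subset)] = x[list(subset)]
        return predict(data).mean()

    phi = np.zeros(n)
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for size in range(n):
            for S in combinations(others, size):
                w = factorial(size) * factorial(n - size - 1) / factorial(n)
                phi[i] += w * (value(S + (i,)) - value(S))
    return phi

predict_pos = lambda data: model.predict_proba(data)[:, 1]
x = X[0]
phi = shapley_values(predict_pos, x, X[:50])

for name, v in zip(feature_names, phi):
    print(f"{name}: {v:+.4f}")
```

By the efficiency property, the attributions sum exactly to the model's output for `x` minus the mean output over the background sample, which is a useful sanity check. In practice the project would use the optimized `shap` and `lime` packages (e.g. `shap.TreeExplainer` and `lime.lime_tabular.LimeTabularExplainer`) rather than this exponential-time computation.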
first_indexed | 2024-10-01T06:26:40Z |
format | Final Year Project (FYP) |
id | ntu-10356/166087 |
institution | Nanyang Technological University |
language | English |
last_indexed | 2024-10-01T06:26:40Z |
publishDate | 2023 |
publisher | Nanyang Technological University |
record_format | dspace |
spelling | ntu-10356/166087 2023-04-21T15:37:13Z Software testing and explainable: A study for evaluating XAI methods on software testing datasets Tay, Glenn Fan Xiuyi School of Computer Science and Engineering xyfan@ntu.edu.sg Engineering::Computer science and engineering::Computing methodologies::Artificial intelligence Engineering::Computer science and engineering::Software::Software engineering Bachelor of Engineering (Computer Engineering) 2023-04-21T04:44:22Z 2023-04-21T04:44:22Z 2023 Final Year Project (FYP) Tay, G. (2023). Software testing and explainable: A study for evaluating XAI methods on software testing datasets. Final Year Project (FYP), Nanyang Technological University, Singapore. https://hdl.handle.net/10356/166087 https://hdl.handle.net/10356/166087 en application/pdf Nanyang Technological University |
title | Software testing and explainable: A study for evaluating XAI methods on software testing datasets |
topic | Engineering::Computer science and engineering::Computing methodologies::Artificial intelligence Engineering::Computer science and engineering::Software::Software engineering |
url | https://hdl.handle.net/10356/166087 |