Summary: | Explainable AI (XAI) is a set of tools and frameworks used to help humans understand
machine learning models, whose predictions are often opaque. Two XAI techniques, SHapley Additive
exPlanations (SHAP) and Local Interpretable Model-Agnostic Explanations (LIME), are state-of-the-art
XAI tools that are model-agnostic and can be used to explain any machine learning model. This
project compares the performance of SHAP and LIME in four aspects: local interpretability of the test
set, global interpretability of the test set, local interpretability of misclassified observations, and
global interpretability of misclassified versus correctly classified observations. The project trains a
Decision Tree Classifier for Software Defect Prediction on publicly available datasets, uses SHAP and
LIME to explain the model's predictions, and compares SHAP and LIME on the four aspects above.
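A minimal sketch of the pipeline described above, assuming scikit-learn together with the shap
and lime packages; the randomly generated dataset and feature names are hypothetical placeholders
for a public software defect dataset, not the data used in this project.

    import numpy as np
    import shap
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.model_selection import train_test_split
    from lime.lime_tabular import LimeTabularExplainer

    # Hypothetical stand-in data: X holds software metrics, y labels defects.
    rng = np.random.default_rng(0)
    X = rng.random((200, 5))
    y = rng.integers(0, 2, 200)
    feature_names = [f"metric_{i}" for i in range(X.shape[1])]

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    model = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)

    # SHAP: TreeExplainer computes Shapley values efficiently for tree models,
    # giving per-feature attributions for every test observation.
    shap_values = shap.TreeExplainer(model).shap_values(X_test)

    # LIME: fit a local surrogate model around a single test observation to
    # explain that one prediction.
    lime_explainer = LimeTabularExplainer(
        X_train,
        feature_names=feature_names,
        class_names=["clean", "defective"],
    )
    lime_exp = lime_explainer.explain_instance(X_test[0], model.predict_proba)

The sketch reflects the two workflows compared in the project: SHAP produces attributions for
the whole test set at once (supporting both local and global analysis), while LIME explains one
observation at a time, so global views must be aggregated from many local explanations.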