Achievable Minimally-Contrastive Counterfactual Explanations


Bibliographic Details
Main Authors: Hosein Barzekar, Susan McRoy
Format: Article
Language: English
Published: MDPI AG, 2023-08-01
Series: Machine Learning and Knowledge Extraction
Subjects: machine learning; interpretability; feasibility; counterfactual and contrastive explanation
Online Access: https://www.mdpi.com/2504-4990/5/3/48
collection DOAJ
description Decision support systems based on machine learning models should be able to help users identify opportunities and threats. Popular model-agnostic explanation models can identify factors that support various predictions, answering questions such as “What factors affect sales?” or “Why did sales decline?”, but do not highlight what a person should or could do to get a more desirable outcome. Counterfactual explanation approaches address intervention, and some even consider feasibility, but none consider their suitability for real-time applications, such as question answering. Here, we address this gap by introducing a novel model-agnostic method that provides specific, feasible changes that would impact the outcomes of a complex Black Box AI model for a given instance and assess its real-world utility by measuring its real-time performance and ability to find achievable changes. The method uses the instance of concern to generate high-precision explanations and then applies a secondary method to find achievable minimally-contrastive counterfactual explanations (AMCC) while limiting the search to modifications that satisfy domain-specific constraints. Using a widely recognized dataset, we evaluated the classification task to ascertain the frequency and time required to identify successful counterfactuals. For a 90% accurate classifier, our algorithm identified AMCC explanations in 47% of cases (38 of 81), with an average discovery time of 80 ms. These findings verify the algorithm’s efficiency in swiftly producing AMCC explanations, suitable for real-time systems. The AMCC method enhances the transparency of Black Box AI models, aiding individuals in evaluating remedial strategies or assessing potential outcomes.
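The abstract describes a two-stage, model-agnostic procedure: generate a high-precision explanation for the instance of concern, then search for a small set of feature changes that flips the black-box prediction while satisfying domain-specific achievability constraints. The paper's actual AMCC algorithm is not reproduced in this record; as a rough illustration only, a constrained minimally-contrastive counterfactual search might look like the following sketch (all names here, e.g. `find_counterfactual` and `is_achievable`, are hypothetical, not from the paper):

```python
# Illustrative sketch only -- NOT the paper's AMCC implementation.
# Shows the general shape of a constrained counterfactual search over a
# black-box classifier: try change sets of increasing size, and keep the
# first one that both satisfies domain constraints and flips the prediction.
from itertools import combinations

def find_counterfactual(predict, instance, candidate_changes,
                        is_achievable, max_changes=2):
    """predict: maps a feature dict to a class label.
    candidate_changes: list of (feature, new_value) pairs to consider.
    is_achievable: domain-specific constraint check on (original, modified)."""
    original = predict(instance)
    # Try the smallest change sets first, so any hit is minimally contrastive.
    for k in range(1, max_changes + 1):
        for changes in combinations(candidate_changes, k):
            modified = dict(instance)
            modified.update(dict(changes))
            if is_achievable(instance, modified) and predict(modified) != original:
                return dict(changes)  # an achievable counterfactual of size k
    return None  # no achievable counterfactual within the budget
```

A real system would plug in the trained classifier for `predict` and encode feasibility rules (e.g. "a customer's age cannot decrease") in `is_achievable`; exhaustive enumeration as above is only workable for small candidate sets.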
id doaj.art-273cd0e28cd445f0be12e92f597a1d98
institution Directory Open Access Journal
issn 2504-4990
doi 10.3390/make5030048 (Machine Learning and Knowledge Extraction, vol. 5, no. 3, pp. 922–936)
affiliation Department of Computer Science, University of Wisconsin-Milwaukee, Milwaukee, WI 53211, USA (both authors)
title Achievable Minimally-Contrastive Counterfactual Explanations
topic machine learning
interpretability
feasibility
counterfactual and contrastive explanation