Shapley Values for Feature Selection: The Good, the Bad, and the Axioms
The Shapley value has become popular in the Explainable AI (XAI) literature, thanks in large part to its solid theoretical foundation, including four "favourable and fair" axioms for attribution in transferable utility games. The Shapley value is probably the only solution...
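As a concrete illustration of the concept the abstract refers to, below is a minimal sketch of exact Shapley value computation via the standard subset-weight formula. The three-player characteristic function `v` is a hypothetical toy example chosen here for illustration, not a game taken from the article.

```python
from itertools import combinations
from math import factorial

def shapley_values(players, value):
    """Exact Shapley values via the subset-weight formula.

    players: list of player labels
    value:   function mapping a frozenset of players to a real payoff
    """
    players = list(players)
    n = len(players)
    phi = {}
    for i in players:
        others = [p for p in players if p != i]
        total = 0.0
        for k in range(n):
            for subset in combinations(others, k):
                S = frozenset(subset)
                # Weight |S|! (n - |S| - 1)! / n! times i's marginal contribution
                w = factorial(k) * factorial(n - k - 1) / factorial(n)
                total += w * (value(S | {i}) - value(S))
        phi[i] = total
    return phi

# Hypothetical characteristic function: the coalition pays off only
# when both "a" and "b" cooperate; "c" is a dummy player.
def v(S):
    return 1.0 if {"a", "b"} <= S else 0.0

print(shapley_values(["a", "b", "c"], v))
# "a" and "b" split the payoff equally; the dummy "c" receives 0,
# and the values sum to v({"a", "b", "c"}) = 1 (efficiency axiom).
```

The output makes two of the axioms visible directly: symmetry (interchangeable players "a" and "b" get equal shares) and the dummy axiom (the non-contributing player "c" gets zero).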
Main Authors: Daniel Fryer, Inga Strumke, Hien Nguyen
Format: Article
Language: English
Published: IEEE, 2021-01-01
Series: IEEE Access
Online Access: https://ieeexplore.ieee.org/document/9565902/
Similar Items
- Model independent feature attributions: Shapley values that uncover non-linear dependencies
  by: Daniel Vidali Fryer, et al.
  Published: (2021-06-01)
- Exact Shapley values for local and model-true explanations of decision tree ensembles
  by: Thomas W. Campbell, et al.
  Published: (2022-09-01)
- Feature Selection Using Approximated High-Order Interaction Components of the Shapley Value for Boosted Tree Classifier
  by: Carlin Chun Fai Chu, et al.
  Published: (2020-01-01)
- Explaining multivariate molecular diagnostic tests via Shapley values
  by: Joanna Roder, et al.
  Published: (2021-07-01)
- A Multi-Objective Multi-Label Feature Selection Algorithm Based on Shapley Value
  by: Hongbin Dong, et al.
  Published: (2021-08-01)