Explaining Multiclass Compound Activity Predictions Using Counterfactuals and Shapley Values
Most machine learning (ML) models produce black box predictions that are difficult, if not impossible, to understand. In pharmaceutical research, black box predictions work against the acceptance of ML models for guiding experimental work. Hence, there is increasing interest in approaches for explai...
| Main Authors: | Alec Lamens, Jürgen Bajorath |
| --- | --- |
| Format: | Article |
| Language: | English |
| Published: | MDPI AG, 2023-07-01 |
| Series: | Molecules |
| Online Access: | https://www.mdpi.com/1420-3049/28/14/5601 |
Similar Items
- Explaining Accurate Predictions of Multitarget Compounds with Machine Learning Models Derived for Individual Targets
  by: Alec Lamens, et al.
  Published: (2023-01-01)
- Improving users' mental model with attention-directed counterfactual edits
  by: Kamran Alipour, et al.
  Published: (2021-12-01)
- A Survey of Contrastive and Counterfactual Explanation Generation Methods for Explainable Artificial Intelligence
  by: Ilia Stepin, et al.
  Published: (2021-01-01)
- Counterfactual Models for Fair and Adequate Explanations
  by: Nicholas Asher, et al.
  Published: (2022-03-01)
- A Multi–Criteria Approach for Selecting an Explanation from the Set of Counterfactuals Produced by an Ensemble of Explainers
  by: Stepka Ignacy, et al.
  Published: (2024-03-01)