CausaLM: Causal Model Explanation Through Counterfactual Language Models
Abstract: Understanding predictions made by deep neural networks is notoriously difficult, but also crucial to their dissemination. Like all machine learning–based methods, they are only as good as their training data, and can also capture unwanted biases. While there are tools that can help...
Main Authors: Amir Feder, Nadav Oved, Uri Shalit, Roi Reichart
Format: Article
Language: English
Published: The MIT Press, 2021-07-01
Series: Computational Linguistics
Online Access: https://direct.mit.edu/coli/article/47/2/333/98518/CausaLM-Causal-Model-Explanation-Through
Similar Items
- Predicting In-Game Actions from Interviews of NBA Players
  by: Nadav Oved, et al.
  Published: (2020-07-01)
- Model Compression for Domain Adaptation through Causal Effect Estimation
  by: Guy Rotman, et al.
  Published: (2021-01-01)
- Counterfactual Models for Fair and Adequate Explanations
  by: Nicholas Asher, et al.
  Published: (2022-03-01)
- Model-Agnostic Counterfactual Explanations in Credit Scoring
  by: Xolani Dastile, et al.
  Published: (2022-01-01)
- PADA: Example-based Prompt Learning for on-the-fly Adaptation to Unseen Domains
  by: Eyal Ben-David, et al.
  Published: (2022-01-01)