Make Up Your Mind! Adversarial Generation of Inconsistent Natural Language Explanations
To increase trust in artificial intelligence systems, a promising research direction consists of designing neural models capable of generating natural language explanations for their predictions. In this work, we show that such models are nonetheless prone to generating mutually inconsistent explana...
Main Authors: Camburu, O-M; Shillingford, B; Minervini, P; Lukasiewicz, T; Blunsom, P
Format: Conference item
Language: English
Published: ACL Anthology, 2020
Similar Items

- e-SNLI: Natural language inference with natural language explanations
  by: Camburu, O, et al.
  Published: (2018)
- Explanations for inconsistency-tolerant query answering under existential rules
  by: Lukasiewicz, T, et al.
  Published: (2020)
- e-ViL: A dataset and benchmark for natural language explanations in vision-language tasks
  by: Kayser, M, et al.
  Published: (2022)
- Learning from the best: Rationalizing prediction by adversarial information calibration
  by: Sha, L, et al.
  Published: (2021)
- Mutti's Making Up Your Mind by Robert Mutti
  by: Lisa Warenski
  Published: (2004-01-01)