Make Up Your Mind! Adversarial Generation of Inconsistent Natural Language Explanations
To increase trust in artificial intelligence systems, a promising research direction consists of designing neural models capable of generating natural language explanations for their predictions. In this work, we show that such models are nonetheless prone to generating mutually inconsistent explanations.
Main Authors: |
Format: | Conference item
Language: | English
Published: | ACL Anthology, 2020