On guaranteed optimal robust explanations for NLP models
We build on abduction-based explanations for machine learning and develop a method for computing local explanations for neural network models in natural language processing (NLP). Our explanations comprise a subset of the words of the input text that satisfies two key features: optimality w.r.t. a u...
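As context for the abstract, the sketch below illustrates the general idea of an optimal robust explanation: a smallest subset of input words that, when held fixed, keeps the model's prediction unchanged under perturbations of the remaining words. This is only a minimal illustration, not the paper's verified abduction-based procedure: the toy `predict` classifier, the substitution dictionary, and the exhaustive robustness check are all assumptions standing in for the formal reasoning used in the actual method.

```python
# Illustrative sketch (not the paper's exact algorithm): find a smallest
# subset of input words that, when kept fixed, leaves a toy classifier's
# prediction unchanged under all candidate substitutions of the other words.
from itertools import combinations, product

def predict(words):
    # Toy sentiment classifier, a hypothetical stand-in for an NLP model.
    score = sum({"good": 1, "great": 1, "bad": -1, "awful": -1}.get(w, 0) for w in words)
    return "pos" if score >= 0 else "neg"

def is_robust(words, fixed, substitutions):
    # Prediction must be invariant when every word outside `fixed`
    # ranges over its candidate substitutions (checked exhaustively here).
    free = [i for i in range(len(words)) if i not in fixed]
    choices = [substitutions.get(words[i], [words[i]]) for i in free]
    base = predict(words)
    for combo in product(*choices):
        perturbed = list(words)
        for i, w in zip(free, combo):
            perturbed[i] = w
        if predict(perturbed) != base:
            return False
    return True

def minimal_explanation(words, substitutions):
    # Optimality w.r.t. explanation length: try subsets in increasing size.
    for k in range(len(words) + 1):
        for subset in combinations(range(len(words)), k):
            if is_robust(words, set(subset), substitutions):
                return [words[i] for i in subset]
    return list(words)

words = ["the", "movie", "was", "great"]
substitutions = {"great": ["great", "awful"], "movie": ["movie", "film"]}
print(minimal_explanation(words, substitutions))  # e.g. ['great']
```

In this toy example, fixing only "great" already guarantees the positive prediction against all listed substitutions, so it is returned as the (length-)minimal explanation.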
Main Authors: | La Malfa, E, Michelmore, R, Zbrzezny, AM, Paoletti, N, Kwiatkowska, M |
---|---|
Format: | Conference item |
Language: | English |
Published: | International Joint Conferences on Artificial Intelligence, 2021 |
Similar Items
- Statistical guarantees for the robustness of Bayesian neural networks
  By: Cardelli, L, et al.
  Published: (2019)
- Towards Faithful Model Explanation in NLP: A Survey
  By: Qing Lyu, et al.
  Published: (2024-07-01)
- Explanation-Based Human Debugging of NLP Models: A Survey
  By: Piyawat Lertvittayakumjorn, et al.
  Published: (2021-01-01)
- Robustness guarantees for deep neural networks on videos
  By: Kwiatkowska, M, et al.
  Published: (2020)
- Safety and robustness for deep learning with provable guarantees (keynote)
  By: Kwiatkowska, M
  Published: (2019)