On guaranteed optimal robust explanations for NLP models


Bibliographic Details
Main Authors: La Malfa, E, Michelmore, R, Zbrzezny, AM, Paoletti, N, Kwiatkowska, M
Format: Conference item
Language: English
Published: International Joint Conferences on Artificial Intelligence, 2021
Description: We build on abduction-based explanations for machine learning and develop a method for computing local explanations for neural network models in natural language processing (NLP). Our explanations comprise a subset of the words of the input text that satisfies two key features: optimality w.r.t. a user-defined cost function, such as the length of explanation, and robustness, in that they ensure prediction invariance for any bounded perturbation in the embedding space of the left-out words. We present two solution algorithms, respectively based on implicit hitting sets and maximum universal subsets, introducing a number of algorithmic improvements to speed up convergence of hard instances. We show how our method can be configured with different perturbation sets in the embedded space and used to detect bias in predictions by enforcing include/exclude constraints on biased terms, as well as to enhance existing heuristic-based NLP explanation frameworks such as Anchors. We evaluate our framework on three widely used sentiment analysis tasks and texts of up to 100 words from SST, Twitter and IMDB datasets, demonstrating the effectiveness of the derived explanations.
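The description's implicit-hitting-set idea can be illustrated at toy scale. Everything below is a sketch, not the paper's implementation: `min_hitting_set` is brute force (the paper introduces optimized solving and convergence improvements), `is_robust` is a stand-in for the neural-network verification oracle that checks prediction invariance under bounded embedding perturbations of the left-out words, and the cost function is fixed to explanation length.

```python
from itertools import combinations

def min_hitting_set(sets, universe):
    """Smallest subset of `universe` intersecting every set in `sets`.
    Brute force by increasing size, so the result has minimum cost
    when cost is explanation length."""
    for k in range(len(universe) + 1):
        for cand in combinations(sorted(universe), k):
            if all(set(cand) & s for s in sets):
                return set(cand)
    return set(universe)

def optimal_robust_explanation(words, is_robust):
    """Implicit-hitting-set loop (illustrative): collect 'conflict' sets of
    word indices, at least one of which must be kept in the explanation,
    until a minimum hitting set of the conflicts passes the robustness
    oracle. `is_robust(fixed)` should return True iff the prediction is
    invariant when only the words in `fixed` are held fixed."""
    universe = set(range(len(words)))
    conflicts = []
    while True:
        candidate = min_hitting_set(conflicts, universe)
        if is_robust(candidate):
            # Minimum-cost hitting set that is also robust: optimal.
            return {words[i] for i in candidate}
        free = universe - candidate  # words the oracle was allowed to perturb
        if not free:
            raise ValueError("no robust explanation exists for this input")
        # The free words admit a counterexample, so at least one of them
        # must join the explanation: record them as a new conflict.
        conflicts.append(free)
```

For example, with a toy oracle that is robust exactly when the indices of "not" and "good" are fixed, the loop converges to the length-2 explanation {"not", "good"}; each failed oracle call adds a conflict that rules out the current candidate, so candidates grow only as far as the optimum requires.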
Record ID: oxford-uuid:72a097e4-c2f4-4dd5-a7b1-f237c07eb810
Institution: University of Oxford