Interpretability Analysis for Named Entity Recognition to Understand System Predictions and How They Can Improve

Abstract: Named entity recognition systems achieve remarkable performance on domains such as English news. It is natural to ask: what are these models actually learning to achieve this? Are they merely memorizing the names themselves? Or are they capable of interpreting the text and inferring the correct entity type from the linguistic context? We examine these questions by contrasting the performance of several variants of architectures for named entity recognition, some of which are provided only representations of the context as features. We experiment with a GloVe-based BiLSTM-CRF as well as BERT. We find that context does influence predictions, but that the main factor driving high performance is learning the named tokens themselves. Furthermore, we find that BERT is not always better than a BiLSTM-CRF model at recognizing predictive contexts. We enlist human annotators to evaluate the feasibility of inferring entity types from context alone and find that humans are also unable to infer entity types for the majority of examples on which the context-only system made errors. However, there is room for improvement: a system should be able to recognize any named entity in a predictive context correctly, and our experiments indicate that current systems may be improved by such a capability. Our human study also revealed that systems and humans do not always learn the same contextual clues, and context-only systems are sometimes correct even when humans fail to recognize the entity type from the context. Finally, we find that one issue contributing to model errors is the use of “entangled” representations that encode both contextual and local token information into a single vector, which can obscure clues. Our results suggest that designing models that explicitly operate over representations of local inputs and context, respectively, may in some cases improve performance. In light of these and related findings, we highlight directions for future work.
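
The core manipulation the abstract describes, giving a system only representations of the context rather than the name itself, can be pictured as occluding the entity tokens before a model sees the sentence. The sketch below is a minimal illustration of that idea, not the authors' actual pipeline; the BIO tag scheme, the "[MASK]" placeholder, the function name, and the example sentence are all assumptions made for illustration.

```python
# Illustrative sketch (not the paper's released code): one way to build
# "context-only" inputs for NER by occluding the name tokens, so a model
# can rely only on the surrounding linguistic context.

def occlude_entities(tokens, bio_tags, placeholder="[MASK]"):
    """Replace every token inside a named-entity span with a placeholder,
    leaving the context words (and the gold tags) untouched."""
    occluded = []
    for token, tag in zip(tokens, bio_tags):
        if tag.startswith(("B-", "I-")):  # token is part of an entity span
            occluded.append(placeholder)
        else:
            occluded.append(token)
    return occluded

tokens = ["Yesterday", ",", "Smith", "flew", "to", "Paris", "."]
tags   = ["O", "O", "B-PER", "O", "O", "B-LOC", "O"]
print(occlude_entities(tokens, tags))
# ['Yesterday', ',', '[MASK]', 'flew', 'to', '[MASK]', '.']
# A context-only system must now infer PER and LOC from clues like "flew to".
```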

Bibliographic Details
Main Authors: Oshin Agarwal, Yinfei Yang, Byron C. Wallace, Ani Nenkova
Format: Article
Language: English
Published: The MIT Press, 2021-03-01
Series: Computational Linguistics
Online Access: https://direct.mit.edu/coli/article/47/1/117/97335/Interpretability-Analysis-for-Named-Entity
Volume/Issue: 47 (1), pp. 117–140
DOI: 10.1162/coli_a_00397
ISSN: 0891-2017 (print), 1530-9312 (online)
Author Affiliations: Oshin Agarwal, University of Pennsylvania, Department of Computer and Information Science (oagarwal@seas.upenn.edu); Yinfei Yang, Google Research (yinfeiy@google.com); Byron C. Wallace, Northeastern University, Khoury College of Computer Sciences (b.wallace@northeastern.edu); Ani Nenkova, University of Pennsylvania, Department of Computer and Information Science (nenkova@seas.upenn.edu)