Reducing Confusion in Active Learning for Part-Of-Speech Tagging

Bibliographic Details
Main Authors: Aditi Chaudhary, Antonios Anastasopoulos, Zaid Sheikh, Graham Neubig
Format: Article
Language: English
Published: The MIT Press, 2021-01-01
Series: Transactions of the Association for Computational Linguistics
Online Access: https://direct.mit.edu/tacl/article/doi/10.1162/tacl_a_00350/97781/Reducing-Confusion-in-Active-Learning-for-Part-Of
collection DOAJ
description Abstract: Active learning (AL) uses a data selection algorithm to select useful training samples to minimize annotation cost. This is now an essential tool for building low-resource syntactic analyzers such as part-of-speech (POS) taggers. Existing AL heuristics are generally designed on the principle of selecting uncertain yet representative training instances, where annotating these instances may reduce a large number of errors. However, in an empirical study across six typologically diverse languages (German, Swedish, Galician, North Sami, Persian, and Ukrainian), we found the surprising result that even in an oracle scenario where we know the true uncertainty of predictions, these current heuristics are far from optimal. Based on this analysis, we pose the problem of AL as selecting instances that maximally reduce the confusion between particular pairs of output tags. Extensive experimentation on the aforementioned languages shows that our proposed AL strategy outperforms other AL strategies by a significant margin. We also present auxiliary results demonstrating the importance of proper calibration of models, which we ensure through cross-view training, and analysis demonstrating how our proposed strategy selects examples that more closely follow the oracle data distribution. The code is publicly released.
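The abstract's core idea — scoring unlabeled sentences by how much they exhibit the model's most-confused pair of output tags, rather than by raw per-token uncertainty — can be illustrated with a small sketch. This is not the authors' implementation; the function name, data layout, and toy probabilities are all illustrative assumptions over a tagger's softmax outputs.

```python
from collections import defaultdict

def select_confusing_sentences(sent_token_probs, k=1):
    """Pick the k sentences whose tokens contribute most to the
    globally most-confused pair of output tags.

    sent_token_probs: list of sentences; each sentence is a list of
    dicts mapping tag -> predicted probability for one token (each
    dict is assumed to contain at least two tags). This is an
    illustrative stand-in for a POS tagger's softmax outputs.
    """
    # 1) Accumulate "confusion mass" for each unordered tag pair:
    #    the probability of the runner-up tag at each token.
    pair_mass = defaultdict(float)
    for sent in sent_token_probs:
        for probs in sent:
            (t1, _), (t2, p2) = sorted(probs.items(), key=lambda kv: -kv[1])[:2]
            pair_mass[frozenset((t1, t2))] += p2

    # 2) The globally most-confused tag pair.
    top_pair = max(pair_mass, key=pair_mass.get)

    # 3) Score each sentence by how much its tokens exhibit that confusion.
    def score(sent):
        s = 0.0
        for probs in sent:
            (t1, _), (t2, p2) = sorted(probs.items(), key=lambda kv: -kv[1])[:2]
            if frozenset((t1, t2)) == top_pair:
                s += p2
        return s

    ranked = sorted(range(len(sent_token_probs)),
                    key=lambda i: score(sent_token_probs[i]),
                    reverse=True)
    return ranked[:k], top_pair
```

Under this sketch, a sentence full of tokens where, say, NOUN and VERB receive nearly equal probability outranks a sentence that is merely uncertain across many unrelated tag pairs — which is the contrast the abstract draws with standard uncertainty-based heuristics.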
id doaj.art-98d044bf96054b06aa494cf3d95b90fa
institution Directory Open Access Journal
issn 2307-387X
spelling doaj.art-98d044bf96054b06aa494cf3d95b90fa
DOI: 10.1162/tacl_a_00350 (Transactions of the Association for Computational Linguistics, vol. 9, pp. 1–16, The MIT Press, 2021-01-01)
Aditi Chaudhary, Language Technologies Institute, Carnegie Mellon University, United States. aschaudh@cs.cmu.edu
Antonios Anastasopoulos, Department of Computer Science, George Mason University, United States. antonis@gmu.edu
Zaid Sheikh, Language Technologies Institute, Carnegie Mellon University, United States. zsheikh@cs.cmu.edu
Graham Neubig, Language Technologies Institute, Carnegie Mellon University, United States. gneubig@cs.cmu.edu