Adapting to Learner Errors with Minimal Supervision

This article considers the problem of correcting errors made by English as a Second Language writers from a machine learning perspective, and addresses the important issue of developing an appropriate training paradigm for the task, one that accounts for the error patterns of non-native writers using minimal supervision. Existing training approaches present a trade-off between the large amounts of cheap data available to native-trained models and the additional knowledge of learner error patterns provided by the more expensive method of training on annotated learner data. We propose a novel training approach that draws on the strengths of the two standard training paradigms (training either on native or on annotated learner data) and that outperforms both of these standard methods. Using the key observation that the parameters relating to error regularities exhibited by non-native writers are relatively simple, we develop models that incorporate knowledge about error regularities from a small annotated sample but that are otherwise trained on native English data. The key contribution of this article is the introduction and analysis of two methods for adapting the learned models to the error patterns of non-native writers: one that applies to generative classifiers and one that applies to discriminative classifiers. Both methods have demonstrated state-of-the-art performance in several text correction competitions; in particular, the Illinois system that implements these methods ranked at the top in two recent CoNLL shared tasks on error correction. We further evaluate the proposed approaches by studying the effect of using error data from speakers of the same native language, of closely related languages, and of unrelated languages.

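The abstract describes the approach only at a high level. The following is a minimal, hypothetical Python sketch of the general idea, not the authors' implementation: it assumes the adaptation works by estimating a confusion distribution over correction candidates from a small annotated sample, which can then stand in for a native-trained prior (the generative route) or drive injection of artificial errors into clean native text to build adapted training data (the discriminative route). All names (estimate_confusions, inject_errors) are illustrative.

    import random
    from collections import Counter, defaultdict

    def estimate_confusions(annotated_pairs, candidates, smoothing=1.0):
        """Estimate P(what the learner wrote | intended word) from a small
        annotated sample of (written, intended) pairs, with add-k smoothing
        so unseen confusions keep nonzero probability."""
        counts = defaultdict(Counter)
        for written, intended in annotated_pairs:
            counts[intended][written] += 1
        dist = {}
        for intended in candidates:
            total = sum(counts[intended].values()) + smoothing * len(candidates)
            dist[intended] = {
                written: (counts[intended][written] + smoothing) / total
                for written in candidates
            }
        return dist

    def inject_errors(native_tokens, confusions, seed=0):
        """Corrupt clean native text according to the learner confusion
        distribution, producing (corrupted token, correct label) pairs that
        a discriminative classifier can be trained on."""
        rng = random.Random(seed)
        pairs = []
        for tok in native_tokens:
            if tok in confusions:
                choices = list(confusions[tok])
                weights = [confusions[tok][c] for c in choices]
                written = rng.choices(choices, weights=weights)[0]
                pairs.append((written, tok))
            else:
                # Tokens outside the confusion set pass through unchanged.
                pairs.append((tok, tok))
        return pairs

    # Toy article-error sample; "" marks a missing article.
    sample = [("a", "a"), ("a", "the"), ("", "the"), ("the", "the"), ("an", "an")]
    candidates = ["a", "an", "the", ""]
    confusions = estimate_confusions(sample, candidates)

    # Generative route: confusions["the"] could replace a native-trained prior.
    print(confusions["the"])
    # Discriminative route: build adapted training data from native text.
    print(inject_errors(["the", "cat", "sat", "on", "a", "mat"], confusions))

Because only the confusion distribution is learner-specific, a sketch like this needs just a small annotated sample, while the bulk of the training signal still comes from cheap native text, which is the trade-off the abstract highlights.
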
Bibliographic Details
Main Authors: Alla Rozovskaya, Dan Roth, Mark Sammons
Format: Article
Language: English
Published: The MIT Press, 2017-09-01
Series: Computational Linguistics (Volume 43, Issue 4)
ISSN: 1530-9312
Collection: Directory of Open Access Journals (DOAJ)
Online Access: http://dx.doi.org/10.1162/coli_a_00299