To Augment or Not to Augment? A Comparative Study on Text Augmentation Techniques for Low-Resource NLP

Abstract

Data-hungry deep neural networks have established themselves as the de facto standard for many NLP tasks, including traditional sequence tagging tasks. Despite their state-of-the-art performance on high-resource languages, they still fall behind their statistical counterparts in low-resource scenarios. One methodology for countering this problem is text augmentation, that is, generating new synthetic training data points from existing data. Although NLP has recently witnessed several new textual augmentation techniques, the field still lacks a systematic performance analysis across a diverse set of languages and sequence tagging tasks. To fill this gap, we investigate three categories of text augmentation methodologies that perform changes at the syntax level (e.g., cropping sub-sentences), token level (e.g., random word insertion), and character level (e.g., character swapping). We systematically compare the methods on part-of-speech tagging, dependency parsing, and semantic role labeling for a diverse set of language families, using various models, including architectures that rely on pretrained multilingual contextualized language models such as mBERT. Augmentation most significantly improves dependency parsing, followed by part-of-speech tagging and semantic role labeling. We find the tested techniques to be effective on morphologically rich languages in general, rather than on analytic languages such as Vietnamese. Our results suggest that the augmentation techniques can further improve over strong baselines based on mBERT, especially for dependency parsing. We identify the character-level methods as the most consistent performers, while synonym replacement and syntactic augmenters provide inconsistent improvements. Finally, we show that the results depend most heavily on the task, language pair (e.g., syntactic-level techniques mostly benefit higher-level tasks and morphologically richer languages), and model type (e.g., token-level augmentation provides significant improvements for BPE-based models, while character-level augmentation generally yields higher scores for char- and mBERT-based models).
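
To make the token- and character-level categories concrete, here is a minimal sketch of label-preserving augmenters for a tagged sentence. This is not the paper's implementation: the function names, the length threshold, and the tag handling are illustrative assumptions, and the syntax-level cropping operation is omitted because it requires a dependency tree.

```python
import random

def swap_characters(token: str, rng: random.Random) -> str:
    """Character-level augmentation: swap two adjacent inner characters.

    Very short tokens are left unchanged so the word stays recognizable.
    (Illustrative sketch; the paper's exact constraints may differ.)
    """
    if len(token) < 4:
        return token
    i = rng.randrange(1, len(token) - 2)  # keep first and last characters fixed
    chars = list(token)
    chars[i], chars[i + 1] = chars[i + 1], chars[i]
    return "".join(chars)

def random_word_insertion(tokens: list[str], tags: list[str],
                          rng: random.Random) -> tuple[list[str], list[str]]:
    """Token-level augmentation: re-insert a random word from the sentence.

    The copied word brings its own tag along, so the token/tag
    alignment required by sequence tagging stays intact.
    """
    j = rng.randrange(len(tokens))      # word to duplicate
    i = rng.randrange(len(tokens) + 1)  # insertion point
    return (tokens[:i] + [tokens[j]] + tokens[i:],
            tags[:i] + [tags[j]] + tags[i:])

rng = random.Random(0)
tokens = ["the", "parser", "handles", "long", "sentences"]
tags   = ["DET", "NOUN",  "VERB",    "ADJ",  "NOUN"]

aug_tokens = [swap_characters(t, rng) for t in tokens]  # e.g. "praser"
ins_tokens, ins_tags = random_word_insertion(tokens, tags, rng)
print(aug_tokens)
print(list(zip(ins_tokens, ins_tags)))
```

Both operations preserve the one-to-one token/tag alignment, which is what makes them directly applicable to sequence tagging tasks such as POS tagging; free-text augmenters that change sentence length without adjusting the labels would not be.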

Bibliographic Details
Main Author: Gözde Gül Şahin (Koç University, Computer Science and Engineering Department; gosahin@ku.edu.tr)
Format: Article
Language: English
Published: The MIT Press, 2022-01-01
Series: Computational Linguistics, 48(1), pp. 5-42
ISSN: 0891-2017 (print), 1530-9312 (online)
DOI: 10.1162/coli_a_00425
Collection: DOAJ (Directory of Open Access Journals)
Online Access: https://direct.mit.edu/coli/article/48/1/5/108844/To-Augment-or-Not-to-Augment-A-Comparative-Study