To Augment or Not to Augment? A Comparative Study on Text Augmentation Techniques for Low-Resource NLP
Abstract

Data-hungry deep neural networks have established themselves as the de facto standard for many NLP tasks, including the traditional sequence tagging ones. Despite their state-of-the-art performance on high-resource languages, they still fall behind their statistical counterpa...
| Main Author: | Gözde Gül Şahin |
| --- | --- |
| Format: | Article |
| Language: | English |
| Published: | The MIT Press, 2022-01-01 |
| Series: | Computational Linguistics |
| Online Access: | https://direct.mit.edu/coli/article/48/1/5/108844/To-Augment-or-Not-to-Augment-A-Comparative-Study |
Similar Items

- Leveraging Advanced NLP Techniques and Data Augmentation to Enhance Online Misogyny Detection
  by: Alaa Mohasseb, et al.
  Published: (2025-01-01)
- An Empirical Survey of Data Augmentation for Limited Data Learning in NLP
  by: Jiaao Chen, et al.
  Published: (2023-01-01)
- PlethAugment: GAN-based PPG augmentation for medical diagnosis in low-resource settings
  by: Kiyasseh, D., et al.
  Published: (2020)
- Tailored text augmentation for sentiment analysis
  by: Feng, Zijian, et al.
  Published: (2022)
- Text Data Augmentation for the Korean Language
  by: Dang Thanh Vu, et al.
  Published: (2022-03-01)