Pre-Training on Mixed Data for Low-Resource Neural Machine Translation
The pre-training and fine-tuning paradigm has been shown to be effective for low-resource neural machine translation. In this paradigm, models pre-trained on monolingual data are used to initialize translation models, transferring knowledge from the monolingual data into the translation models. In recent years, pr...
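The abstract describes initializing a translation model with weights learned on monolingual data before fine-tuning on parallel data. A minimal sketch of that transfer step is shown below; it is not the paper's code, the toy `TinyNMT` model and the pre-trained state are hypothetical stand-ins.

```python
# Minimal sketch (not the paper's implementation) of the pre-train -> fine-tune
# transfer described in the abstract: weights learned on monolingual data are
# loaded into the translation model, which is then fine-tuned on parallel data.
import torch.nn as nn

VOCAB, D_MODEL = 32000, 512

class TinyNMT(nn.Module):
    """Toy Transformer-based translation model (hypothetical)."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, D_MODEL)
        self.transformer = nn.Transformer(
            d_model=D_MODEL, num_encoder_layers=2,
            num_decoder_layers=2, batch_first=True,
        )
        self.generator = nn.Linear(D_MODEL, VOCAB)

    def forward(self, src, tgt):
        h = self.transformer(self.embed(src), self.embed(tgt))
        return self.generator(h)

model = TinyNMT()

# Stand-in for weights obtained by pre-training on monolingual data
# (e.g. with a denoising objective); here just a second model's state.
pretrained_state = TinyNMT().state_dict()

# Initialize the translation model from the pre-trained weights, then
# fine-tune on the (small) parallel corpus as usual.
model.load_state_dict(pretrained_state, strict=False)
```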
Main Authors: Wenbo Zhang, Xiao Li, Yating Yang, Rui Dong
Format: Article
Language: English
Published: MDPI AG, 2021-03-01
Series: Information
Online Access: https://www.mdpi.com/2078-2489/12/3/133
Similar Items
- A Diverse Data Augmentation Strategy for Low-Resource Neural Machine Translation
  by: Yu Li, et al.
  Published: (2020-05-01)
- Keeping Models Consistent between Pretraining and Translation for Low-Resource Neural Machine Translation
  by: Wenbo Zhang, et al.
  Published: (2020-11-01)
- Research on the robustness of neural machine translation systems in word order perturbation
  by: Yuran Zhao, Tang Xue, Gongshen Liu
  Published: (2023-10-01)
- Hierarchical Transfer Learning Architecture for Low-Resource Neural Machine Translation
  by: Gongxu Luo, et al.
  Published: (2019-01-01)
- Sharing high-quality language resources in the legal domain to develop neural machine translation for under-resourced European languages
  by: Federico Gaspari, et al.
  Published: (2022-12-01)