An Automatic Error Detection Method for Machine Translation Results via Deep Learning


Bibliographic Details
Main Author: Weihong Zhang
Format: Article
Language: English
Published: IEEE 2023-01-01
Series: IEEE Access
Subjects:
Online Access: https://ieeexplore.ieee.org/document/10138173/
Description
Summary: The rapid development of natural language processing has brought great progress to machine translation, and deep neural network-based machine translation approaches have become increasingly common. However, effective automatic error detection approaches for machine translation results are still lacking. To bridge this gap, this paper proposes an automatic error detection method for machine translation results based on deep learning. Training data are synthesized with the deep generative model proposed in the paper and used to train a grammatical error correction model for foreign trade English. The correction model is then used to correct the source sentences in a learner corpus, and the corrected target sentences are paired with the manually annotated standard sentences to form “error-correct” sentence pairs, which are fed back to the error generation model for alternating training. By establishing a link between the grammatical error detection model and the grammatical error correction model, the error detection and correction capability of the model is improved. Experiments on datasets such as GTRSB show that the proposed error detection method significantly improves the stealthiness of the trigger while preserving the effectiveness of the backdoor attack, and also enables the trigger to resist certain data augmentation operations.
ISSN: 2169-3536