Experts, Errors, and Context: A Large-Scale Study of Human Evaluation for Machine Translation
Main Authors: | Markus Freitag, George Foster, David Grangier, Viresh Ratnakar, Qijun Tan, Wolfgang Macherey |
---|---|
Format: | Article |
Language: | English |
Published: | The MIT Press, 2021-01-01 |
Series: | Transactions of the Association for Computational Linguistics |
Online Access: | https://direct.mit.edu/tacl/article/doi/10.1162/tacl_a_00437/108866/Experts-Errors-and-Context-A-Large-Scale-Study-of |
Abstract: | Human evaluation of modern high-quality machine translation systems is a difficult problem, and there is increasing evidence that inadequate evaluation procedures can lead to erroneous conclusions. While there has been considerable research on human evaluation, the field still lacks a commonly accepted standard procedure. As a step toward this goal, we propose an evaluation methodology grounded in explicit error analysis, based on the Multidimensional Quality Metrics (MQM) framework. We carry out the largest MQM research study to date, scoring the outputs of top systems from the WMT 2020 shared task in two language pairs using annotations provided by professional translators with access to full document context. We analyze the resulting data extensively, finding among other results a substantially different ranking of evaluated systems from the one established by the WMT crowd workers, exhibiting a clear preference for human over machine output. Surprisingly, we also find that automatic metrics based on pre-trained embeddings can outperform human crowd workers. We make our corpus publicly available for further research. |
ISSN: | 2307-387X |
Author Affiliations: | Markus Freitag, George Foster, David Grangier, Viresh Ratnakar, Qijun Tan, Wolfgang Macherey (all Google Research) |
DOI: | 10.1162/tacl_a_00437 |
Volume / Pages: | 9, 1460-1474 |
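The abstract above describes an evaluation methodology in which professional translators annotate individual translation errors under the Multidimensional Quality Metrics (MQM) framework and those annotations are aggregated into system scores. The record itself gives no scoring details, so the sketch below is only a minimal illustration of how MQM-style error annotations are commonly rolled up into segment- and system-level penalties; the severity labels, weights, field names, and functions are assumptions for illustration, not values taken from the paper.

```python
from collections import defaultdict

# Illustrative severity weights (assumed, not from the paper): each annotated
# error contributes a penalty, so lower scores mean better translations.
SEVERITY_WEIGHTS = {
    "major": 5.0,
    "minor": 1.0,
    "neutral": 0.0,
}

def mqm_segment_scores(annotations):
    """Sum error penalties per segment.

    `annotations` is a list of dicts like
    {"segment_id": 3, "category": "Accuracy/Mistranslation", "severity": "major"}.
    """
    totals = defaultdict(float)
    for err in annotations:
        totals[err["segment_id"]] += SEVERITY_WEIGHTS.get(err["severity"].lower(), 0.0)
    return dict(totals)

def mqm_system_score(annotations, num_segments):
    """Average segment penalty over all evaluated segments; segments with no
    annotated errors contribute zero."""
    per_segment = mqm_segment_scores(annotations)
    return sum(per_segment.values()) / max(num_segments, 1)

# Example: two annotated errors across a 4-segment document.
errors = [
    {"segment_id": 0, "category": "Accuracy/Omission", "severity": "major"},
    {"segment_id": 2, "category": "Fluency/Grammar", "severity": "minor"},
]
print(mqm_system_score(errors, num_segments=4))  # (5 + 1) / 4 = 1.5
```

Under this convention lower is better; the paper's actual error taxonomy and weighting scheme should be taken from the article at the Online Access link above.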