Active learning and crowdsourcing: a survey of annotation optimization methods

Bibliographic Details
Main Authors: R. A. Gilyazev, D. Y. Turdakov
Format: Article
Language: English
Published: Ivannikov Institute for System Programming of the Russian Academy of Sciences, 2018-10-01
Series: Труды Института системного программирования РАН
Subjects: active learning; crowdsourcing; corpus annotation; crowd computing
Online Access: https://ispranproceedings.elpub.ru/jour/article/view/489
author R. A. Gilyazev
D. Y. Turdakov
collection DOAJ
description High-quality labeled corpora play a key role in building machine learning systems. Creating such corpora generally requires human effort, so the annotation process is expensive and time-consuming. Two approaches that optimize annotation are active learning and crowdsourcing. Active learning methods aim to find the examples that are most informative for the classifier: at each iteration an algorithm selects one example from the unlabeled set, the example is given to an oracle (expert) for labeling, and the classifier is retrained on the updated set of training examples. Crowdsourcing is widely used for problems that cannot be automated and require human effort. To get the most out of crowd platforms, three problems need to be solved. The first is quality: algorithms are needed that best recover the true labels from the labels provided by annotators. The second is cost: solving the quality problem simply by increasing the number of annotators per example is not always reasonable. The third is time: when a labeled corpus is needed quickly, the delays introduced while participants complete tasks must be minimized. This paper surveys existing methods based on these approaches and techniques for combining them. The paper also describes systems that help to reduce the cost of annotation.
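
The description above walks through the pool-based active learning loop (pick the most informative unlabeled example, ask the oracle for its label, retrain). The following is a minimal illustrative sketch of that loop, not code from the surveyed paper; it assumes scikit-learn-style estimators, uses least-confidence uncertainty sampling as one possible selection strategy, and lets the known true labels stand in for the human oracle.

```python
# Minimal sketch of pool-based active learning with uncertainty sampling.
# Illustration only: dataset, model, and budget are arbitrary assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=500, n_features=10, random_state=0)

labeled = list(rng.choice(len(X), size=10, replace=False))  # small seed set
unlabeled = [i for i in range(len(X)) if i not in labeled]  # the "pool"

clf = LogisticRegression(max_iter=1000)
for _ in range(20):                                  # annotation budget
    clf.fit(X[labeled], y[labeled])                  # retrain on current labels
    proba = clf.predict_proba(X[unlabeled])
    uncertainty = 1.0 - proba.max(axis=1)            # least-confident examples
    pick = unlabeled[int(np.argmax(uncertainty))]    # most informative example
    labeled.append(pick)                             # oracle (expert) supplies y[pick]
    unlabeled.remove(pick)
```

The description also mentions the quality problem in crowdsourcing: recovering the true label of an example from several annotators' answers. The simplest baseline, majority voting, might look like the sketch below; the annotation data and label names are hypothetical, and real systems typically go further by modeling annotator reliability.

```python
# Toy sketch of label aggregation by majority vote (hypothetical data).
from collections import Counter

# example id -> labels given by different crowd workers
annotations = {
    "ex1": ["spam", "spam", "ham"],
    "ex2": ["ham", "ham", "ham"],
    "ex3": ["spam", "ham"],   # ties need an explicit policy in practice
}

def majority_vote(labels):
    # most frequent label; on ties Counter keeps the label seen first
    return Counter(labels).most_common(1)[0][0]

aggregated = {ex: majority_vote(labs) for ex, labs in annotations.items()}
print(aggregated)
```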
format Article
id doaj.art-cebfe95f04e24128b48f8ff570ffe5d4
institution Directory Open Access Journal
issn 2079-8156
2220-6426
language English
publishDate 2018-10-01
publisher Ivannikov Institute for System Programming of the Russian Academy of Sciences
record_format Article
series Труды Института системного программирования РАН
spelling Труды Института системного программирования РАН, vol. 30, no. 2 (2018-10-01), pp. 215-250; DOI 10.15514/ISPRAS-2018-30(2)-11. Author affiliations: R. A. Gilyazev - Ivannikov Institute for System Programming of the RAS, Moscow Institute of Physics and Technology; D. Y. Turdakov - Ivannikov Institute for System Programming of the RAS, Lomonosov Moscow State University, National Research University Higher School of Economics.
title Active learning and crowdsourcing: a survey of annotation optimization methods
topic active learning
crowdsourcing
corpus annotation
crowd computing
url https://ispranproceedings.elpub.ru/jour/article/view/489