Approximating interactive human evaluation with self-play for open-domain dialog systems

© 2019 Neural Information Processing Systems Foundation. All rights reserved. Building an open-domain conversational agent is a challenging problem. Current evaluation methods, mostly post-hoc judgments of static conversations, do not capture conversation quality in a realistic interactive context. In this paper, we investigate interactive human evaluation and provide evidence for its necessity; we then introduce a novel, model-agnostic, and dataset-agnostic method to approximate it. In particular, we propose a self-play scenario where the dialog system talks to itself, and we calculate a combination of proxies such as sentiment and semantic coherence on the conversation trajectory. We show that this metric captures the human-rated quality of a dialog model better than any automated metric known to date, achieving a significant Pearson correlation (r > 0.7, p < 0.05). To investigate the strengths of this novel metric and of interactive evaluation relative to state-of-the-art metrics and human evaluation of static conversations, we perform extended experiments with a set of models, including several that make novel improvements to recent hierarchical dialog generation architectures through sentiment and semantic knowledge distillation at the utterance level. Finally, we open-source the interactive evaluation platform we built and the dataset we collected, allowing researchers to efficiently deploy and evaluate dialog models.
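As a rough illustration of the method described above, the sketch below rolls out a self-play conversation, scores the trajectory with simple proxies, and correlates the combined metric with human ratings. This is a toy rendering, not the authors' implementation: generate_reply, the lexicon-based sentiment proxy, the word-overlap coherence proxy, and the human ratings are all hypothetical placeholders; only the overall pipeline (self-play rollout, trajectory proxies, Pearson correlation) follows the abstract.

```python
# Minimal sketch of the self-play evaluation idea. All model and scoring
# functions are hypothetical stand-ins; a real setup would use a trained
# dialog model and learned sentiment/coherence scorers.
import math
from scipy.stats import pearsonr


def generate_reply(history):
    """Hypothetical dialog model: replace with a real model's inference call."""
    return "echo: " + history[-1]


def sentiment_score(utterance):
    """Toy lexicon-based sentiment proxy (placeholder for a learned scorer)."""
    positive = {"good", "great", "love", "nice"}
    negative = {"bad", "hate", "awful", "boring"}
    words = utterance.lower().split()
    return sum((w in positive) - (w in negative) for w in words) / max(len(words), 1)


def coherence_score(prev, curr):
    """Toy semantic-coherence proxy: word-overlap cosine between consecutive
    utterances (placeholder for embedding similarity)."""
    a, b = set(prev.lower().split()), set(curr.lower().split())
    if not a or not b:
        return 0.0
    return len(a & b) / math.sqrt(len(a) * len(b))


def self_play_metric(seed_utterance, turns=10):
    """Let the model talk to itself, then average proxies over the trajectory."""
    history = [seed_utterance]
    for _ in range(turns):
        history.append(generate_reply(history))
    sent = sum(sentiment_score(u) for u in history) / len(history)
    coh = sum(coherence_score(p, c) for p, c in zip(history, history[1:])) / turns
    return sent + coh  # combination of proxies; equal weights assumed here


# Validate the metric the way the paper does: correlate per-model self-play
# scores with interactive human quality ratings (values below are made up).
self_play_scores = [self_play_metric(seed) for seed in ["hi", "great day", "bad news"]]
human_ratings = [3.1, 4.2, 2.5]  # illustrative only
r, p = pearsonr(self_play_scores, human_ratings)
print(f"Pearson r = {r:.2f}, p = {p:.3f}")
```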

Bibliographic Details
Main Authors: Ghandeharioun, A, Shen, JH, Jaques, N, Ferguson, C, Jones, N, Lapedriza, A, Picard, R
Format: Article (Conference Paper)
Language: English
Published: Neural Information Processing Systems (NIPS), 2021
Online Access: https://hdl.handle.net/1721.1/137062
Full Text: https://proceedings.neurips.cc/paper/2019/file/fc9812127bf09c7bd29ad6723c683fb5-Paper.pdf
Institution: Massachusetts Institute of Technology
Citation: Ghandeharioun, A., Shen, J.H., Jaques, N., Ferguson, C., Jones, N., et al. "Approximating interactive human evaluation with self-play for open-domain dialog systems." Advances in Neural Information Processing Systems, 32 (2019).
Rights: Article is made available in accordance with the publisher's policy and may be subject to US copyright law. Please refer to the publisher's site for terms of use.