Collaborative Training of GANs in Continuous and Discrete Spaces for Text Generation

Applying generative adversarial networks (GANs) to text-related tasks is challenging due to the discrete nature of language. One line of research resolves this issue by employing reinforcement learning (RL) and optimizing the next-word sampling policy directly in a discrete action space. Such methods compute the rewards from complete sentences and avoid error accumulation due to exposure bias. Other approaches employ approximation techniques that map the text to a continuous representation in order to circumvent the non-differentiable discrete process. In particular, autoencoder-based methods effectively produce robust representations that can model complex discrete structures. In this article, we propose a novel text GAN architecture that promotes the collaborative training of the continuous-space and discrete-space methods. Our method employs an autoencoder to learn an implicit data manifold, providing a learning objective for adversarial training in a continuous space. Furthermore, the complete textual output is directly evaluated and updated via RL in a discrete space. The collaborative interplay between the two adversarial trainings effectively regularizes the text representations in the different spaces. The experimental results on three standard benchmark datasets show that our model substantially outperforms state-of-the-art text GANs with respect to quality, diversity, and global consistency.

Bibliographic Details
Main Authors: Yanghoon Kim, Seungpil Won, Seunghyun Yoon, Kyomin Jung
Format: Article
Language: English
Published: IEEE, 2020-01-01
Series: IEEE Access
Subjects: Adversarial training; collaborative training; text GAN
Online Access: https://ieeexplore.ieee.org/document/9296209/
author Yanghoon Kim
Seungpil Won
Seunghyun Yoon
Kyomin Jung
collection DOAJ
description Applying generative adversarial networks (GANs) to text-related tasks is challenging due to the discrete nature of language. One line of research resolves this issue by employing reinforcement learning (RL) and optimizing the next-word sampling policy directly in a discrete action space. Such methods compute the rewards from complete sentences and avoid error accumulation due to exposure bias. Other approaches employ approximation techniques that map the text to a continuous representation in order to circumvent the non-differentiable discrete process. In particular, autoencoder-based methods effectively produce robust representations that can model complex discrete structures. In this article, we propose a novel text GAN architecture that promotes the collaborative training of the continuous-space and discrete-space methods. Our method employs an autoencoder to learn an implicit data manifold, providing a learning objective for adversarial training in a continuous space. Furthermore, the complete textual output is directly evaluated and updated via RL in a discrete space. The collaborative interplay between the two adversarial trainings effectively regularizes the text representations in the different spaces. The experimental results on three standard benchmark datasets show that our model substantially outperforms state-of-the-art text GANs with respect to quality, diversity, and global consistency.
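The abstract combines two training signals: an adversarial objective on the autoencoder's continuous latent space, and a REINFORCE-style objective whose reward is computed on the complete sampled sentence. A minimal sketch of how such a combined loss might look, using toy stdlib-only stand-ins (the function names, the squared-distance latent term, and the scalar weighting are illustrative assumptions, not the authors' implementation):

```python
def continuous_loss(latent, reconstructed):
    """Continuous-space term: squared L2 distance between a latent code
    and its reconstruction on the learned manifold (toy stand-in)."""
    return sum((a - b) ** 2 for a, b in zip(latent, reconstructed))

def reinforce_loss(token_log_probs, reward, baseline=0.0):
    """Discrete-space term: REINFORCE surrogate that scales the summed
    log-probabilities of the sampled sentence by a sentence-level reward
    (the reward is computed only once the sentence is complete)."""
    return -(reward - baseline) * sum(token_log_probs)

def collaborative_loss(latent, reconstructed, token_log_probs, reward,
                       weight=1.0, baseline=0.0):
    """Combine both objectives; `weight` balances the two spaces."""
    return (continuous_loss(latent, reconstructed)
            + weight * reinforce_loss(token_log_probs, reward, baseline))
```

For example, a perfectly reconstructed latent code with a positively rewarded sentence yields only the (negative) policy-gradient term, so minimizing the combined loss pushes probability mass toward rewarded sentences while keeping latents on the manifold.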
format Article
id doaj.art-d82f081423c34e1c918dfdddafac96dc
institution Directory Open Access Journal
issn 2169-3536
language English
publishDate 2020-01-01
publisher IEEE
record_format Article
series IEEE Access
spelling Yanghoon Kim (ORCID: https://orcid.org/0000-0002-9236-0702), Department of Electrical and Computer Engineering, Seoul National University, Seoul, South Korea
Seungpil Won (ORCID: https://orcid.org/0000-0002-3557-4157), Department of Electrical and Computer Engineering, Seoul National University, Seoul, South Korea
Seunghyun Yoon (ORCID: https://orcid.org/0000-0002-7262-3579), Adobe Research, San Jose, CA, USA
Kyomin Jung, Department of Electrical and Computer Engineering, Seoul National University, Seoul, South Korea
"Collaborative Training of GANs in Continuous and Discrete Spaces for Text Generation," IEEE Access, vol. 8, pp. 226515-226523, 2020-01-01. DOI: 10.1109/ACCESS.2020.3045166. https://ieeexplore.ieee.org/document/9296209/
title Collaborative Training of GANs in Continuous and Discrete Spaces for Text Generation
topic Adversarial training
collaborative training
text GAN
url https://ieeexplore.ieee.org/document/9296209/