Enhancing Conversational Model With Deep Reinforcement Learning and Adversarial Learning

Bibliographic Details
Main Authors: Quoc-Dai Luong Tran, Anh-Cuong Le, Van-Nam Huynh
Format: Article
Language: English
Published: IEEE 2023-01-01
Series: IEEE Access
Subjects:
Online Access: https://ieeexplore.ieee.org/document/10189840/
Description
Summary: This paper develops a chatbot conversational model aimed at two goals: 1) utilizing contextual information to generate accurate and relevant responses, and 2) implementing strategies to make conversations human-like. We propose a supervised learning approach for model development and train on a dataset of multi-turn conversations. In particular, we first develop a module based on deep reinforcement learning that maximizes the use of contextual information, helping to ensure accurate response generation. We then incorporate the response generation process into an adversarial learning framework to make the generated responses more human-like. Combining these two phases yields a unified model that generates semantically appropriate responses expressed as naturally as human-generated ones in conversation. We conducted various experiments and obtained a significant improvement over the baseline and other related studies.
ISSN: 2169-3536
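The abstract describes an adversarial phase in which a discriminator's judgment of human-likeness supplies a reward signal for the response generator, optimized by policy gradients. The paper's actual models are not reproduced in this record; the sketch below only illustrates that general idea under heavy simplifying assumptions: single-token "responses" over a toy vocabulary, a rule-based stand-in for the learned discriminator, and a tabular REINFORCE generator. All names (`discriminator_reward`, `reinforce_step`, the vocabulary) are hypothetical, not from the paper.

```python
import math
import random

random.seed(0)

VOCAB = ["hello", "hi", "bye", "spam"]

# Hypothetical tabular generator: one logit per token (no context, for brevity).
logits = {w: 0.0 for w in VOCAB}

def softmax(ls):
    """Numerically stable softmax over a dict of logits."""
    m = max(ls.values())
    exps = {w: math.exp(v - m) for w, v in ls.items()}
    z = sum(exps.values())
    return {w: e / z for w, e in exps.items()}

def sample_token():
    """Sample one token from the generator's current distribution."""
    probs = softmax(logits)
    r, acc = random.random(), 0.0
    for w, p in probs.items():
        acc += p
        if r <= acc:
            return w
    return w  # fallback for floating-point edge cases

def discriminator_reward(tok):
    """Stand-in discriminator: reward 1.0 for 'human-like' tokens, 0.0 otherwise."""
    return 0.0 if tok == "spam" else 1.0

def reinforce_step(lr=0.5):
    """One REINFORCE update: push up log-prob of tokens the discriminator rewards."""
    tok = sample_token()
    r = discriminator_reward(tok)
    probs = softmax(logits)
    for w in VOCAB:
        # Gradient of log pi(tok) w.r.t. logit w: 1[w == tok] - probs[w]
        grad = (1.0 if w == tok else 0.0) - probs[w]
        logits[w] += lr * r * grad

for _ in range(200):
    reinforce_step()

probs = softmax(logits)
```

After training, probability mass shifts away from the token the discriminator never rewards, which is the mechanism the adversarial phase relies on: the generator is steered toward outputs the discriminator scores as human-like. In the actual paper this would operate over full multi-turn responses with learned generator and discriminator networks rather than a single-token table.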