Partially randomizing transformer weights for dialogue response diversity

Bibliographic Details
Main Authors: Lee, Jing Yang, Lee, Kong Aik, Gan, Woon-Seng
Other Authors: School of Electrical and Electronic Engineering
Format: Conference Paper
Language: English
Published: 2023
Online Access: https://hdl.handle.net/10356/172416
https://paclic2023.github.io/
Description
Summary: Despite recent progress in generative open-domain dialogue, the issue of low response diversity persists. Prior works have addressed this issue via either novel objective functions, alternative learning approaches such as variational frameworks, or architectural extensions such as the Randomized Link (RL) Transformer. However, these approaches typically entail either additional difficulties during training/inference, or a significant increase in model size and complexity. Hence, we propose the Partially Randomized transFormer (PaRaFormer), a simple extension of the transformer which involves freezing the weights of selected layers after random initialization. Experimental results reveal that the performance of the PaRaFormer is comparable to that of the aforementioned approaches, despite not entailing any additional training difficulty or increase in model complexity.
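
The core mechanism described in the abstract can be illustrated with a minimal PyTorch sketch: build a standard transformer encoder, then freeze the parameters of selected layers so they retain their random initialization and never receive gradient updates. The layer indices, dimensions, and function name below are illustrative assumptions, not details taken from the paper.

    import torch.nn as nn

    def build_partially_randomized_encoder(num_layers=6, frozen_layers=(1, 3),
                                           d_model=512, nhead=8):
        """Sketch: transformer encoder with selected layers frozen at
        their random initialization (hypothetical layer choices)."""
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=nhead)
        encoder = nn.TransformerEncoder(layer, num_layers=num_layers)
        # Freeze the chosen layers: their randomly initialized weights
        # receive no gradient updates and stay fixed during training.
        for idx in frozen_layers:
            for param in encoder.layers[idx].parameters():
                param.requires_grad = False
        return encoder

    encoder = build_partially_randomized_encoder()
    trainable = sum(p.numel() for p in encoder.parameters() if p.requires_grad)
    print(f"trainable parameters: {trainable}")

Note that freezing layers in this way changes neither the architecture nor the total parameter count relative to a standard transformer; it only reduces the number of trainable parameters, which is consistent with the abstract's claim of no added training difficulty or model complexity.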