Parameter-Efficient Fine-Tuning Method for Task-Oriented Dialogue Systems
The use of Transformer-based pre-trained language models has become prevalent in enhancing the performance of task-oriented dialogue systems. These models, which are pre-trained on large text data to grasp language syntax and semantics, fine-tune the entire parameter set according to a specific...
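The abstract is truncated above, so the article's specific method is not shown here. As a generic, hedged illustration of the contrast it draws between full fine-tuning and parameter-efficient fine-tuning, the sketch below uses a LoRA-style low-rank adapter on a single frozen linear layer (assumed PyTorch; this is not the authors' proposed method): the pre-trained weights stay fixed and only two small matrices are trained.

```python
# A minimal sketch of parameter-efficient fine-tuning in the LoRA style.
# This is a generic illustration, NOT the method proposed in the article:
# a frozen pre-trained linear layer is augmented with two small trainable
# matrices A and B, so only r*(d_in + d_out) parameters are updated
# instead of the full d_in*d_out weight matrix.
import torch
import torch.nn as nn


class LoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        # Freeze the pre-trained weights; they are not updated during fine-tuning.
        for p in self.base.parameters():
            p.requires_grad = False
        # Trainable low-rank factors: effective weight is W + (alpha/r) * B @ A
        self.lora_A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scaling = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scaling * (x @ self.lora_A.T @ self.lora_B.T)


if __name__ == "__main__":
    # Pretend this linear layer came from a pre-trained Transformer block.
    pretrained = nn.Linear(768, 768)
    adapted = LoRALinear(pretrained, rank=8)

    trainable = sum(p.numel() for p in adapted.parameters() if p.requires_grad)
    total = sum(p.numel() for p in adapted.parameters())
    print(f"trainable params: {trainable} / {total}")  # ~12k of ~603k

    # Only the adapter parameters are handed to the optimizer.
    optimizer = torch.optim.AdamW(
        [p for p in adapted.parameters() if p.requires_grad], lr=1e-3
    )
    x = torch.randn(4, 768)
    loss = adapted(x).pow(2).mean()
    loss.backward()
    optimizer.step()
```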
| Main Authors: | Yunho Mo, Joon Yoo, Sangwoo Kang |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | MDPI AG, 2023-07-01 |
| Series: | Mathematics |
| Subjects: | |
| Online Access: | https://www.mdpi.com/2227-7390/11/14/3048 |
Similar Items
- Intermediate Task Fine-Tuning in Cancer Classification
  by: Mario Alejandro García, et al.
  Published: (2023-10-01)
- Variational Reward Estimator Bottleneck: Towards Robust Reward Estimator for Multidomain Task-Oriented Dialogue
  by: Jeiyoon Park, et al.
  Published: (2021-07-01)
- Structure-Aware Low-Rank Adaptation for Parameter-Efficient Fine-Tuning
  by: Yahao Hu, et al.
  Published: (2023-10-01)
- Performance Improvement on Traditional Chinese Task-Oriented Dialogue Systems With Reinforcement Learning and Regularized Dropout Technique
  by: Jeng-Shin Sheu, et al.
  Published: (2023-01-01)
- Improved Spoken Language Representation for Intent Understanding in a Task-Oriented Dialogue System
  by: June-Woo Kim, et al.
  Published: (2022-02-01)