Multitask Fine Tuning on Pretrained Language Model for Retrieval-Based Question Answering in Automotive Domain
Retrieval-based question answering in the automotive domain requires a model to comprehend and articulate relevant domain knowledge, accurately understand user intent, and effectively match the required information. Typically, these systems employ an encoder–retriever architecture. However, existing...
| Main Authors: | Zhiyi Luo, Sirui Yan, Shuyun Luo |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | MDPI AG, 2023-06-01 |
| Series: | Mathematics |
| Online Access: | https://www.mdpi.com/2227-7390/11/12/2733 |
Similar Items
- Ask me in your own words: paraphrasing for multitask question answering
  by: G. Thomas Hudson, et al.
  Published: (2021-10-01)
- QARR-FSQA: Question-Answer Replacement and Removal Pretraining Framework for Few-Shot Question Answering
  by: Siao Wah Tan, et al.
  Published: (2024-01-01)
- Harnessing the Power of Metadata for Enhanced Question Retrieval in Community Question Answering
  by: Shima Ghasemi, et al.
  Published: (2024-01-01)
- Arabic Question Answering Systems: Gap Analysis
  by: Mariam M. Biltawi, et al.
  Published: (2021-01-01)
- Promoting convergence and efficacy of open‐domain question answering via unsupervised clustering
  by: Shuoyan Liu, et al.
  Published: (2024-08-01)