Multitask Fine Tuning on Pretrained Language Model for Retrieval-Based Question Answering in Automotive Domain
Retrieval-based question answering in the automotive domain requires a model to comprehend and articulate relevant domain knowledge, accurately understand user intent, and effectively match the required information. Typically, these systems employ an encoder–retriever architecture. However, existing...
Main Authors: | Zhiyi Luo, Sirui Yan, Shuyun Luo |
---|---|
Format: | Article |
Language: | English |
Published: | MDPI AG, 2023-06-01 |
Series: | Mathematics |
Subjects: | deep learning; pretrained language model; retrieval-based question answering; multitask learning; fine tuning (see the sketch below this table) |
Online Access: | https://www.mdpi.com/2227-7390/11/12/2733 |
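The subject terms center on multitask fine-tuning of a single pretrained encoder. Purely as a generic sketch of that idea, and not the authors' setup (the record says only that the corpus carries "multitask annotated data", so the task names and label counts below are hypothetical), a shared BERT encoder with one lightweight head per task could look like this:

```python
# Generic multitask fine-tuning skeleton: one shared pretrained encoder,
# one small classification head per task, trained through a joint objective.
# Illustrative only; the paper's tasks, heads, and loss weighting are not
# given in this record. Assumes: pip install torch transformers.
import torch.nn as nn
from transformers import AutoModel

class MultitaskModel(nn.Module):
    def __init__(self, model_name="bert-base-chinese", task_num_labels=None):
        super().__init__()
        # Hypothetical task set; the record does not name the actual tasks.
        task_num_labels = task_num_labels or {"intent": 5, "qa_match": 2}
        self.encoder = AutoModel.from_pretrained(model_name)
        hidden = self.encoder.config.hidden_size
        self.heads = nn.ModuleDict(
            {task: nn.Linear(hidden, n) for task, n in task_num_labels.items()})

    def forward(self, task, input_ids, attention_mask):
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        cls = out.last_hidden_state[:, 0]   # [CLS] token representation
        return self.heads[task](cls)        # logits for the requested task
```

A common training loop alternates mini-batches across tasks and backpropagates each task's cross-entropy loss through the shared encoder, so every task's supervision shapes the same domain representation.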
_version_ | 1827736617247309824 |
author | Zhiyi Luo; Sirui Yan; Shuyun Luo |
author_facet | Zhiyi Luo; Sirui Yan; Shuyun Luo |
author_sort | Zhiyi Luo |
collection | DOAJ |
description | Retrieval-based question answering in the automotive domain requires a model to comprehend and articulate relevant domain knowledge, accurately understand user intent, and effectively match the required information. Typically, these systems employ an encoder–retriever architecture. However, existing encoders, which rely on pretrained language models, suffer from limited specialization, insufficient awareness of domain knowledge, and biases in user intent understanding. To overcome these limitations, this paper constructs a Chinese corpus specifically tailored for the automotive domain, comprising question–answer pairs, document collections, and multitask annotated data. Subsequently, a pretraining–multitask fine-tuning framework based on masked language models is introduced to integrate domain knowledge as well as enhance semantic representations, thereby yielding benefits for downstream applications. To evaluate system performance, an evaluation dataset is created using ChatGPT, and a novel retrieval task evaluation metric called mean linear window rank (MLWR) is proposed. Experimental results demonstrate that the proposed system (based on BERT-base) achieves accuracies of 77.5% and 84.75% for Hit@1 and Hit@3, respectively, in the automotive domain retrieval-based question-answering task. Additionally, the MLWR reaches 87.71%. Compared to a system utilizing a general encoder, the proposed multitask fine-tuning strategy shows improvements of 12.5%, 12.5%, and 28.16% for Hit@1, Hit@3, and MLWR, respectively. Furthermore, when compared to the best single-task fine-tuning strategy, the enhancements amount to 0.5%, 1.25%, and 0.95% for Hit@1, Hit@3, and MLWR, respectively. |
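The description sketches an encoder–retriever pipeline: a pretrained masked language model embeds the question, and the retriever scores it against the document collection. The following is a minimal dense-retrieval sketch of that general pattern, not the authors' system; the bert-base-chinese checkpoint, mean pooling, and cosine scoring are assumptions rather than details from the record.

```python
# Minimal dense encoder-retriever sketch (illustrative only, not the paper's
# exact model). Assumes: pip install torch transformers.
import torch
import torch.nn.functional as F
from transformers import AutoTokenizer, AutoModel

MODEL = "bert-base-chinese"  # assumed checkpoint; the paper may use another
tokenizer = AutoTokenizer.from_pretrained(MODEL)
encoder = AutoModel.from_pretrained(MODEL).eval()

def encode(texts):
    """Mean-pool token states into one L2-normalized vector per text."""
    batch = tokenizer(texts, padding=True, truncation=True,
                      max_length=256, return_tensors="pt")
    with torch.no_grad():
        hidden = encoder(**batch).last_hidden_state          # (B, T, H)
    mask = batch["attention_mask"].unsqueeze(-1).float()     # (B, T, 1)
    pooled = (hidden * mask).sum(dim=1) / mask.sum(dim=1)    # ignore padding
    return F.normalize(pooled, dim=-1)                       # unit vectors

def retrieve(question, documents, k=3):
    """Return (index, cosine score) of the top-k documents for a question."""
    scores = (encode([question]) @ encode(documents).T).squeeze(0)
    top = torch.topk(scores, k=min(k, len(documents)))
    return [(int(i), float(s)) for i, s in zip(top.indices, top.values)]
```

A deployed system would precompute and cache the document vectors; they are re-encoded per call here only to keep the sketch short.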
first_indexed | 2024-03-11T02:11:50Z |
format | Article |
id | doaj.art-e81e808e394b48c8a299433a59f4dc1a |
institution | Directory Open Access Journal |
issn | 2227-7390 |
language | English |
last_indexed | 2024-03-11T02:11:50Z |
publishDate | 2023-06-01 |
publisher | MDPI AG |
record_format | Article |
series | Mathematics |
spelling | doaj.art-e81e808e394b48c8a299433a59f4dc1a | 2023-11-18T11:29:01Z | eng | MDPI AG | Mathematics | 2227-7390 | 2023-06-01 | vol. 11, no. 12, art. 2733 | doi:10.3390/math11122733 | Multitask Fine Tuning on Pretrained Language Model for Retrieval-Based Question Answering in Automotive Domain | Zhiyi Luo; Sirui Yan; Shuyun Luo (all: School of Computer Science and Technology and the Key Laboratory of Intelligent Textile and Flexible Interconnection of Zhejiang Province, Zhejiang Sci-Tech University, Hangzhou 310018, China) | Retrieval-based question answering in the automotive domain requires a model to comprehend and articulate relevant domain knowledge, accurately understand user intent, and effectively match the required information. Typically, these systems employ an encoder–retriever architecture. However, existing encoders, which rely on pretrained language models, suffer from limited specialization, insufficient awareness of domain knowledge, and biases in user intent understanding. To overcome these limitations, this paper constructs a Chinese corpus specifically tailored for the automotive domain, comprising question–answer pairs, document collections, and multitask annotated data. Subsequently, a pretraining–multitask fine-tuning framework based on masked language models is introduced to integrate domain knowledge as well as enhance semantic representations, thereby yielding benefits for downstream applications. To evaluate system performance, an evaluation dataset is created using ChatGPT, and a novel retrieval task evaluation metric called mean linear window rank (MLWR) is proposed. Experimental results demonstrate that the proposed system (based on BERT-base) achieves accuracies of 77.5% and 84.75% for Hit@1 and Hit@3, respectively, in the automotive domain retrieval-based question-answering task. Additionally, the MLWR reaches 87.71%. Compared to a system utilizing a general encoder, the proposed multitask fine-tuning strategy shows improvements of 12.5%, 12.5%, and 28.16% for Hit@1, Hit@3, and MLWR, respectively. Furthermore, when compared to the best single-task fine-tuning strategy, the enhancements amount to 0.5%, 1.25%, and 0.95% for Hit@1, Hit@3, and MLWR, respectively. | https://www.mdpi.com/2227-7390/11/12/2733 | deep learning; pretrained language model; retrieval-based question answering; multitask learning; fine tuning |
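Hit@k, reported above at k = 1 and k = 3, is the standard fraction of test questions whose gold answer appears in the top k retrieved results. The record names the new metric MLWR (mean linear window rank) but never defines it, so linear_window_score below is a loudly hypothetical placeholder for what a linearly decaying, window-bounded rank score might look like; consult the paper for the real formula.

```python
# Hit@k is standard; linear_window_score is a HYPOTHETICAL stand-in for
# MLWR, whose actual formula is given in the paper, not in this record.

def hit_at_k(ranked_ids, gold_id, k):
    """1 if the gold answer is among the top-k retrieved ids, else 0."""
    return int(gold_id in ranked_ids[:k])

def linear_window_score(ranked_ids, gold_id, window=10):
    """Hypothetical 'linear window rank': full credit at rank 1, decaying
    linearly toward 0 inside a fixed window. NOT the paper's MLWR formula."""
    if gold_id not in ranked_ids[:window]:
        return 0.0
    rank = ranked_ids.index(gold_id) + 1   # 1-based rank of the gold answer
    return 1.0 - (rank - 1) / window

def evaluate(runs, k=3, window=10):
    """runs: list of (ranked_ids, gold_id) pairs, one per test question."""
    n = len(runs)
    return {
        "Hit@1": sum(hit_at_k(r, g, 1) for r, g in runs) / n,
        f"Hit@{k}": sum(hit_at_k(r, g, k) for r, g in runs) / n,
        "window-rank (hypothetical)": sum(
            linear_window_score(r, g, window) for r, g in runs) / n,
    }
```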
spellingShingle | Zhiyi Luo; Sirui Yan; Shuyun Luo; Multitask Fine Tuning on Pretrained Language Model for Retrieval-Based Question Answering in Automotive Domain; Mathematics; deep learning; pretrained language model; retrieval-based question answering; multitask learning; fine tuning |
title | Multitask Fine Tuning on Pretrained Language Model for Retrieval-Based Question Answering in Automotive Domain |
title_full | Multitask Fine Tuning on Pretrained Language Model for Retrieval-Based Question Answering in Automotive Domain |
title_fullStr | Multitask Fine Tuning on Pretrained Language Model for Retrieval-Based Question Answering in Automotive Domain |
title_full_unstemmed | Multitask Fine Tuning on Pretrained Language Model for Retrieval-Based Question Answering in Automotive Domain |
title_short | Multitask Fine Tuning on Pretrained Language Model for Retrieval-Based Question Answering in Automotive Domain |
title_sort | multitask fine tuning on pretrained language model for retrieval based question answering in automotive domain |
topic | deep learning; pretrained language model; retrieval-based question answering; multitask learning; fine tuning |
url | https://www.mdpi.com/2227-7390/11/12/2733 |
work_keys_str_mv | AT zhiyiluo multitaskfinetuningonpretrainedlanguagemodelforretrievalbasedquestionansweringinautomotivedomain AT siruiyan multitaskfinetuningonpretrainedlanguagemodelforretrievalbasedquestionansweringinautomotivedomain AT shuyunluo multitaskfinetuningonpretrainedlanguagemodelforretrievalbasedquestionansweringinautomotivedomain |