ReQuEST: A Small-Scale Multi-Task Model for Community Question-Answering Systems

The burgeoning popularity of community question-answering platforms as an information-seeking strategy has prompted researchers to look for ways to save response time and effort, among which question entailment recognition, question summarization, and question tagging are prominent. However, none has investigated the implicit relations among these tasks and the benefits their interaction could provide. In this study, ReQuEST, a novel multi-task model based on bidirectional auto-regressive transformers (BART), is introduced to simultaneously recognize question entailment, summarize questions with respect to given queries, and tag questions with their primary topics. ReQuEST comprises one shared encoder that represents input sequences, two half-shared decoders that provide intermediate representations, and three task-specific heads that produce summaries, tags, and entailed questions. A lightweight fine-tuning technique and a weighted loss function allow the model parameters to be learned efficiently. With roughly 187M learnable parameters, ReQuEST is almost half the size of BART-large and two-thirds smaller than its multi-task counterparts. Empirical experiments on standard summarization datasets reveal that ReQuEST outperforms competitors on Debatepedia with a Rouge-L of 46.77 and achieves competitive performance on MeQSum with a Rouge-L of 37.37. On MediQA-RQE, a medical benchmark for entailment recognition, ReQuEST is also comparable in accuracy to state-of-the-art systems without being pre-trained on domain-specific datasets.


Bibliographic Details
Main Authors: Seyyede Zahra Aftabi, Seyyede Maryam Seyyedi, Mohammad Maleki, Saeed Farzi
Format: Article
Language: English
Published: IEEE, 2024-01-01
Series: IEEE Access
Subjects: Community question answering systems; multi-task learning; query-focused question summarization; question entailment; tag generation
Online Access: https://ieeexplore.ieee.org/document/10413543/
author Seyyede Zahra Aftabi
Seyyede Maryam Seyyedi
Mohammad Maleki
Saeed Farzi
collection DOAJ
description The burgeoning popularity of community question-answering platforms as an information-seeking strategy has prompted researchers to look for ways to save response time and effort, among which question entailment recognition, question summarization, and question tagging are prominent. However, none has investigated the implicit relations among these tasks and the benefits their interaction could provide. In this study, ReQuEST, a novel multi-task model based on bidirectional auto-regressive transformers (BART), is introduced to simultaneously recognize question entailment, summarize questions with respect to given queries, and tag questions with their primary topics. ReQuEST comprises one shared encoder that represents input sequences, two half-shared decoders that provide intermediate representations, and three task-specific heads that produce summaries, tags, and entailed questions. A lightweight fine-tuning technique and a weighted loss function allow the model parameters to be learned efficiently. With roughly 187M learnable parameters, ReQuEST is almost half the size of BART-large and two-thirds smaller than its multi-task counterparts. Empirical experiments on standard summarization datasets reveal that ReQuEST outperforms competitors on Debatepedia with a Rouge-L of 46.77 and achieves competitive performance on MeQSum with a Rouge-L of 37.37. On MediQA-RQE, a medical benchmark for entailment recognition, ReQuEST is also comparable in accuracy to state-of-the-art systems without being pre-trained on domain-specific datasets.
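The description above names the architecture only at a high level. Purely as an illustration of that layout, and not the authors' implementation, the following Python/PyTorch sketch shows one way a single shared encoder, two half-shared decoders, and three task-specific heads could be combined under a weighted multi-task loss; every size, module name (ReQuESTSketch, weighted_loss), sharing split, and loss weight here is an assumption made for the example.

# Hypothetical sketch of the multi-task layout described in the abstract:
# one shared encoder, two half-shared decoders, three task-specific heads,
# trained with a weighted sum of per-task losses. All sizes and the sharing
# scheme are illustrative assumptions, not the published ReQuEST configuration.
import torch
import torch.nn as nn

class ReQuESTSketch(nn.Module):
    def __init__(self, vocab_size=50265, d_model=256, nhead=8,
                 enc_layers=4, dec_shared_layers=2, dec_private_layers=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        enc_layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, enc_layers)  # shared by all tasks

        def dec(num_layers):
            layer = nn.TransformerDecoderLayer(d_model, nhead, batch_first=True)
            return nn.TransformerDecoder(layer, num_layers)

        # "Half-shared" decoders: a common lower stack plus task-side upper stacks.
        self.dec_shared = dec(dec_shared_layers)
        self.dec_summary = dec(dec_private_layers)   # query-focused summarization branch
        self.dec_entail = dec(dec_private_layers)    # entailed-question generation branch

        # Task-specific heads; here the tag head reads the encoder states directly.
        self.summary_head = nn.Linear(d_model, vocab_size)
        self.entail_head = nn.Linear(d_model, vocab_size)
        self.tag_head = nn.Linear(d_model, vocab_size)

    def forward(self, src_ids, sum_tgt_ids, ent_tgt_ids):
        memory = self.encoder(self.embed(src_ids))
        sum_h = self.dec_summary(self.dec_shared(self.embed(sum_tgt_ids), memory), memory)
        ent_h = self.dec_entail(self.dec_shared(self.embed(ent_tgt_ids), memory), memory)
        return self.summary_head(sum_h), self.entail_head(ent_h), self.tag_head(memory)

# Weighted multi-task loss; the weights are arbitrary placeholders.
def weighted_loss(sum_logits, ent_logits, tag_logits, sum_y, ent_y, tag_y,
                  w=(0.4, 0.4, 0.2)):
    ce = nn.CrossEntropyLoss(ignore_index=-100)
    def flat(logits, y):
        return ce(logits.reshape(-1, logits.size(-1)), y.reshape(-1))
    return w[0] * flat(sum_logits, sum_y) + w[1] * flat(ent_logits, ent_y) \
        + w[2] * flat(tag_logits, tag_y)

# Tiny smoke test with random token ids (batch of 2, illustrative lengths).
model = ReQuESTSketch()
src = torch.randint(0, 50265, (2, 48))
sum_tgt = torch.randint(0, 50265, (2, 16))
ent_tgt = torch.randint(0, 50265, (2, 16))
sum_logits, ent_logits, tag_logits = model(src, sum_tgt, ent_tgt)
loss = weighted_loss(sum_logits, ent_logits, tag_logits,
                     sum_y=sum_tgt, ent_y=ent_tgt, tag_y=torch.randint(0, 50265, (2, 48)))

In the paper itself the encoder and decoders come from a pre-trained BART checkpoint and are adapted with a lightweight fine-tuning technique; the sketch only mirrors the parameter-sharing and loss-weighting ideas stated in the abstract.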
first_indexed 2024-03-08T05:34:41Z
format Article
id doaj.art-2139a6b41c164c68b0a01bf25509864f
institution Directory Open Access Journal
issn 2169-3536
language English
last_indexed 2024-03-08T05:34:41Z
publishDate 2024-01-01
publisher IEEE
record_format Article
series IEEE Access
spelling doaj.art-2139a6b41c164c68b0a01bf25509864f (updated 2024-02-06T00:01:03Z)
English. IEEE, IEEE Access, ISSN 2169-3536, 2024-01-01, vol. 12, pp. 17137-17151. DOI 10.1109/ACCESS.2024.3358287; IEEE article no. 10413543.
ReQuEST: A Small-Scale Multi-Task Model for Community Question-Answering Systems
Seyyede Zahra Aftabi (https://orcid.org/0000-0002-3651-9723), Seyyede Maryam Seyyedi (https://orcid.org/0000-0002-3653-9214), Mohammad Maleki (https://orcid.org/0000-0001-5726-6596), Saeed Farzi (https://orcid.org/0000-0003-2850-0616); all with the Faculty of Computer Engineering, K. N. Toosi University of Technology, Tehran, Iran.
Abstract: as in the description field above.
https://ieeexplore.ieee.org/document/10413543/
Keywords: Community question answering systems; multi-task learning; query-focused question summarization; question entailment; tag generation
title ReQuEST: A Small-Scale Multi-Task Model for Community Question-Answering Systems
topic Community question answering systems
multi-task learning
query-focused question summarization
question entailment
tag generation
url https://ieeexplore.ieee.org/document/10413543/