TASTA: Text‐Assisted Spatial and Temporal Attention Network for Video Question Answering


Bibliographic Details
Main Authors: Tian Wang, Boyao Hou, Jiakun Li, Peng Shi, Baochang Zhang, Hichem Snoussi
Format: Article
Language: English
Published: Wiley 2023-04-01
Series: Advanced Intelligent Systems
Online Access: https://doi.org/10.1002/aisy.202200131
Description
Summary: Video question answering (VideoQA) is a typical task that integrates language and vision. The key to VideoQA is extracting visual information that is relevant and effective for answering a specific question. Because videos contain a large amount of irrelevant information, information selection is considered necessary for this task, and explicitly learning an attention model is a reasonable and effective way to perform that selection. Herein, a novel VideoQA model called the Text‐Assisted Spatial and Temporal Attention Network (TASTA) is proposed, which shows the great potential of explicitly modeling attention. TASTA is designed to be simple, compact, clean, and efficient, enabling clear performance justification and easy extension. Its success stems mainly from two new strategies for better exploiting the textual information. Experimental results on TGIF‐QA, a large and highly representative dataset, show that TASTA significantly outperforms the state of the art, and ablation studies demonstrate the effectiveness of its key components.
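The core idea described in the summary, using the question text to weight the relevant visual features, can be illustrated with a minimal temporal-attention sketch. This is not the authors' implementation; the feature shapes, function name, and dot-product scoring are illustrative assumptions only.

```python
import numpy as np

def text_guided_temporal_attention(frame_feats, question_feat):
    """Weight per-frame features by their relevance to the question.

    frame_feats: (T, D) array of frame-level visual features (assumed shape).
    question_feat: (D,) encoding of the question text (assumed shape).
    Returns the attention weights (T,) and the pooled video feature (D,).
    """
    d = frame_feats.shape[1]
    scores = frame_feats @ question_feat / np.sqrt(d)  # per-frame relevance
    scores -= scores.max()                             # numerical stability
    weights = np.exp(scores) / np.exp(scores).sum()    # softmax over time
    pooled = weights @ frame_feats                     # question-weighted average
    return weights, pooled

# Toy example: frame 1 aligns most closely with the question vector,
# so it should receive the largest attention weight.
frames = np.array([[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]])
question = np.array([0.0, 2.0])
w, video_vec = text_guided_temporal_attention(frames, question)
```

A spatial counterpart would apply the same scoring over region features within each frame; TASTA's two textual strategies are not specified in this record, so this sketch shows only the generic text-conditioned selection principle.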
ISSN:2640-4567