A Video Question Answering Model Based on Knowledge Distillation
Video question answering (QA) is a cross-modal task that requires understanding video content in order to answer questions about it. Current techniques address this challenge with stacked modules such as attention mechanisms and graph convolutional networks, which reason about the semantics of video features and their interaction with text-based questions, yielding excellent results.
Main Authors: Zhuang Shao, Jiahui Wan, Linlin Zong
Format: Article
Language: English
Published: MDPI AG, 2023-06-01
Series: Information
Subjects: video question answering; multimodal fusion; knowledge distillation
Online Access: https://www.mdpi.com/2078-2489/14/6/328
author | Zhuang Shao; Jiahui Wan; Linlin Zong
collection | DOAJ |
description | Video question answering (QA) is a cross-modal task that requires understanding video content in order to answer questions about it. Current techniques address this challenge with stacked modules such as attention mechanisms and graph convolutional networks, which reason about the semantics of video features and their interaction with text-based questions, yielding excellent results. However, these approaches typically learn and fuse features representing different aspects of the video separately, neglecting intra-modal interactions and overlooking latent correlations among the extracted features. Moreover, stacking modules introduces a large number of parameters, making the model harder to train. To address these issues, we propose a multimodal knowledge distillation method that leverages knowledge distillation both for model compression and for feature enhancement. Specifically, the fused features of a larger teacher model are distilled into knowledge that guides the learning of appearance and motion features in a smaller student model. By injecting cross-modal information at an early stage, the appearance and motion features can uncover their related and complementary relationships, improving overall model performance. Despite its simplicity, the method shows clear improvements over prior approaches in extensive experiments on the widely used video QA datasets MSVD-QA and MSRVTT-QA, validating the effectiveness of the proposed knowledge distillation approach.
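The abstract describes the method only at a high level: the teacher's fused multimodal features supervise the student's separate appearance and motion streams. As a minimal sketch of such feature-level distillation (not the authors' implementation), the objective might look like the following PyTorch code; the class name, projection layers, feature dimensions, and the choice of MSE matching are all illustrative assumptions rather than details taken from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical feature dimensions; the paper does not specify them here.
D_TEACHER, D_STUDENT = 1024, 512

class DistillationHead(nn.Module):
    """Projects the student's appearance and motion features into the
    teacher's fused-feature space and penalizes their distance, so
    cross-modal knowledge shapes both streams early in training."""
    def __init__(self, d_student: int, d_teacher: int):
        super().__init__()
        self.proj_app = nn.Linear(d_student, d_teacher)
        self.proj_mot = nn.Linear(d_student, d_teacher)

    def forward(self, f_app, f_mot, f_teacher_fused):
        # Detach the teacher target so gradients only update the student.
        t = f_teacher_fused.detach()
        loss_app = F.mse_loss(self.proj_app(f_app), t)
        loss_mot = F.mse_loss(self.proj_mot(f_mot), t)
        return loss_app + loss_mot

# Usage: total loss = QA task loss + weighted distillation loss.
head = DistillationHead(D_STUDENT, D_TEACHER)
f_app = torch.randn(8, D_STUDENT)    # student appearance features (batch of 8)
f_mot = torch.randn(8, D_STUDENT)    # student motion features
f_fused = torch.randn(8, D_TEACHER)  # teacher's fused multimodal features
loss_kd = head(f_app, f_mot, f_fused)
```

Detaching the teacher features keeps gradients from flowing into the teacher, so only the student streams and the projection layers are updated, which matches the abstract's framing of the teacher as a fixed source of distilled knowledge.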
format | Article |
id | doaj.art-1c1a04e5c906460588b10a7e4cef71a4 |
institution | Directory Open Access Journal |
issn | 2078-2489 |
language | English |
publishDate | 2023-06-01 |
publisher | MDPI AG |
record_format | Article |
series | Information |
doi | 10.3390/info14060328
volume | 14
issue | 6
article_number | 328
affiliation | Zhuang Shao: China Academy of Space Technology, Beijing 100094, China
affiliation | Jiahui Wan: Key Laboratory for Ubiquitous Network and Service Software of Liaoning Province, School of Software, Dalian University of Technology, Dalian 116620, China
affiliation | Linlin Zong: Key Laboratory for Ubiquitous Network and Service Software of Liaoning Province, School of Software, Dalian University of Technology, Dalian 116620, China
title | A Video Question Answering Model Based on Knowledge Distillation |
topic | video question answering; multimodal fusion; knowledge distillation
url | https://www.mdpi.com/2078-2489/14/6/328 |