Cross-modal graph with meta concepts for video captioning
Video captioning targets interpreting the complex visual contents as text descriptions, which requires the model to fully understand video scenes including objects and their interactions. Prevailing methods adopt off-the-shelf object detection networks to give object proposals and use the attention mechanism to model the relations between objects. They often miss some undefined semantic concepts of the pretrained model and fail to identify exact predicate relationships between objects. In this paper, we investigate an open research task of generating text descriptions for the given videos, and propose Cross-Modal Graph (CMG) with meta concepts for video captioning. Specifically, to cover the useful semantic concepts in video captions, we weakly learn the corresponding visual regions for text descriptions, where the associated visual regions and textual words are named cross-modal meta concepts. We further build meta concept graphs dynamically with the learned cross-modal meta concepts. We also construct holistic video-level and local frame-level video graphs with the predicted predicates to model video sequence structures. We validate the efficacy of our proposed techniques with extensive experiments and achieve state-of-the-art results on two public datasets.
Main Authors: | Wang, Hao; Lin, Guosheng; Hoi, Steven C. H.; Miao, Chunyan |
---|---|
Other Authors: | School of Computer Science and Engineering |
Format: | Journal Article |
Language: | English |
Published: | 2022 |
Subjects: | Engineering::Computer science and engineering; Video Captioning; Vision-and-Language |
Online Access: | https://hdl.handle.net/10356/162546 |
author | Wang, Hao; Lin, Guosheng; Hoi, Steven C. H.; Miao, Chunyan |
author2 | School of Computer Science and Engineering |
collection | NTU |
description | Video captioning targets interpreting the complex visual contents as text descriptions, which requires the model to fully understand video scenes including objects and their interactions. Prevailing methods adopt off-the-shelf object detection networks to give object proposals and use the attention mechanism to model the relations between objects. They often miss some undefined semantic concepts of the pretrained model and fail to identify exact predicate relationships between objects. In this paper, we investigate an open research task of generating text descriptions for the given videos, and propose Cross-Modal Graph (CMG) with meta concepts for video captioning. Specifically, to cover the useful semantic concepts in video captions, we weakly learn the corresponding visual regions for text descriptions, where the associated visual regions and textual words are named cross-modal meta concepts. We further build meta concept graphs dynamically with the learned cross-modal meta concepts. We also construct holistic video-level and local frame-level video graphs with the predicted predicates to model video sequence structures. We validate the efficacy of our proposed techniques with extensive experiments and achieve state-of-the-art results on two public datasets. |
format | Journal Article |
id | ntu-10356/162546 |
institution | Nanyang Technological University |
language | English |
publishDate | 2022 |
sponsorship | Ministry of Education (MOE); Ministry of Health (MOH); National Research Foundation (NRF) |
version | Submitted/Accepted version |
funding | This work was supported in part by the National Research Foundation (NRF), Singapore, through the AI Singapore Program (AISG) under Award AISG-GC-2019-003 and Award AISG-RP-2018-003 and through the NRF Investigatorship Program (NRFI) under Award NRF-NRFI05-2019-0002; in part by the Singapore Ministry of Health under its National Innovation Challenge on Active and Confident Ageing (NIC) under Project MOH/NIC/HAIG03/2017; and in part by the Ministry of Education (MOE), Singapore, Academic Research Fund (AcRF) Tier-1 Research under Grant RG95/20. |
citation | Wang, H., Lin, G., Hoi, S. C. H. & Miao, C. (2022). Cross-modal graph with meta concepts for video captioning. IEEE Transactions on Image Processing, 31, 5150-5162. https://dx.doi.org/10.1109/TIP.2022.3192709 |
journal | IEEE Transactions on Image Processing |
issn | 1057-7149 |
doi | 10.1109/TIP.2022.3192709 |
pmid | 35901005 |
scopus | 2-s2.0-85135596562 |
grants | AISG-GC-2019-003; AISG-RP-2018-003; NRF-NRFI05-2019-0002; MOH/NIC/HAIG03/2017; RG95/20 |
rights | © 2022 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. The published version is available at: https://doi.org/10.1109/TIP.2022.3192709. |
fulltext | application/pdf |
title | Cross-modal graph with meta concepts for video captioning |
topic | Engineering::Computer science and engineering Video Captioning Vision-and-Language |