Dense video captioning based on local attention

Bibliographic Details
Main Authors: Yong Qian, Yingchi Mao, Zhihao Chen, Chang Li, Olano Teah Bloh, Qian Huang
Format: Article
Language: English
Published: Wiley 2023-07-01
Series: IET Image Processing
Subjects: 2D temporal differential CNN; dense video captioning; event proposal; feature extraction; local attention
Online Access:https://doi.org/10.1049/ipr2.12819
author Yong Qian
Yingchi Mao
Zhihao Chen
Chang Li
Olano Teah Bloh
Qian Huang
collection DOAJ
description Abstract Dense video captioning aims to locate multiple events in an untrimmed video and generate a caption for each event. Previous methods struggled to establish the multimodal feature relationship between frames and captions, resulting in low accuracy of the generated captions. To address this problem, a novel Dense Video Captioning Model Based on Local Attention (DVCL) is proposed. DVCL employs a 2D temporal differential CNN to extract video features, followed by feature encoding with a deformable transformer that establishes the global feature dependence of the input sequence. DIoU and TIoU are then incorporated into the event-proposal matching and evaluation algorithms during training to yield more accurate event proposals and hence higher-quality captions. Furthermore, an LSTM based on local attention is designed to generate captions, enabling each word in a caption to correspond to the relevant frames. Extensive experimental results demonstrate the effectiveness of DVCL. On the ActivityNet Captions dataset, DVCL performs significantly better than other baselines, with improvements of 5.6%, 8.2%, and 15.8% over the best baseline in BLEU-4, METEOR, and CIDEr, respectively.
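
The record gives no implementation details, so the following is only a minimal sketch of how the DIoU criterion mentioned in the abstract might be adapted from 2D boxes to 1D event proposals. The function name temporal_diou, the PyTorch framing, and the (start, end) segment representation are illustrative assumptions, not the authors' code.

```python
import torch

def temporal_diou(pred: torch.Tensor, gt: torch.Tensor) -> torch.Tensor:
    """Distance-IoU adapted to 1D temporal segments (hypothetical sketch).

    pred, gt: (N, 2) tensors holding (start, end) times.
    Returns an (N,) tensor in [-1, 1]; higher means a better match.
    """
    # Temporal IoU of each predicted segment with its ground-truth segment.
    inter = (torch.min(pred[:, 1], gt[:, 1])
             - torch.max(pred[:, 0], gt[:, 0])).clamp(min=0.0)
    union = (pred[:, 1] - pred[:, 0]) + (gt[:, 1] - gt[:, 0]) - inter
    tiou = inter / union.clamp(min=1e-6)

    # DIoU penalty: squared distance between segment centres, normalised by
    # the squared length of the smallest interval enclosing both segments.
    centre_pred = (pred[:, 0] + pred[:, 1]) / 2
    centre_gt = (gt[:, 0] + gt[:, 1]) / 2
    enclose = (torch.max(pred[:, 1], gt[:, 1])
               - torch.min(pred[:, 0], gt[:, 0])).clamp(min=1e-6)
    return tiou - (centre_pred - centre_gt) ** 2 / enclose ** 2
```

Under this sketch, a proposal scores higher when it both overlaps the ground-truth event more and is better centred on it, which is why substituting such a criterion for plain temporal IoU in proposal matching tends to favour tighter event boundaries.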
first_indexed 2024-03-13T01:04:00Z
format Article
id doaj.art-81daafde3214439294b202442683190a
institution Directory Open Access Journal
issn 1751-9659
1751-9667
language English
last_indexed 2024-03-13T01:04:00Z
publishDate 2023-07-01
publisher Wiley
record_format Article
series IET Image Processing
spelling doaj.art-81daafde3214439294b202442683190a 2023-07-06T09:05:41Z
English. Wiley. IET Image Processing, ISSN 1751-9659, 1751-9667. 2023-07-01, vol. 17, no. 9, pp. 2673-2685. doi:10.1049/ipr2.12819
Dense video captioning based on local attention
Yong Qian, Yingchi Mao, Zhihao Chen, Chang Li, Olano Teah Bloh, Qian Huang (School of Computer and Information, Hohai University, Nanjing, China)
https://doi.org/10.1049/ipr2.12819
Keywords: 2D temporal differential CNN; dense video captioning; event proposal; feature extraction; local attention
title Dense video captioning based on local attention
topic 2D temporal differential CNN
dense video captioning
event proposal
feature extraction
local attention
url https://doi.org/10.1049/ipr2.12819