Video summarisation by deep visual and categorical diversity

Bibliographic Details
Main Authors: Pedro Atencio, Germán Sánchez-Torres, John William Branch, Claudio Delrieux
Format: Article
Language: English
Published: Wiley 2019-09-01
Series: IET Computer Vision
Online Access: https://doi.org/10.1049/iet-cvi.2018.5436
Description
Summary: The authors propose a video-summarisation method based on visual and categorical diversities using pre-trained deep visual and categorical models. Their method extracts visual and categorical features from a pre-trained deep convolutional network (DCN) and a pre-trained word-embedding matrix. Using this visual and categorical information, they obtain a video diversity estimation, which is used as an importance score to select the segments of the input video that best describe it. The method also allows queries to be issued during the search process, thereby personalising the resulting video summaries to particular intended purposes. Its performance is evaluated using different pre-trained DCN models in order to select the architecture with the best throughput. The authors then compare it with other state-of-the-art video-summarisation proposals in a data-driven manner on the public SumMe dataset, which contains videos annotated with per-fragment importance. The results show that their method outperforms the other proposals on most of the examples. As an additional advantage, the method admits a simple and direct implementation that does not need a training stage.
ISSN: 1751-9632, 1751-9640
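
For illustration only, and not the authors' implementation, the following minimal Python sketch shows the general idea of diversity-based segment selection described in the summary: per-segment feature vectors, assumed to have been extracted beforehand (e.g. from a pre-trained DCN and a word-embedding matrix), are scored by their mean pairwise cosine distance, and the highest-scoring segments are kept. All function and variable names here are hypothetical.

    # Hypothetical sketch of diversity-based segment selection (not the paper's code).
    import numpy as np

    def diversity_scores(features: np.ndarray) -> np.ndarray:
        """Score each segment by its mean cosine distance to all other segments."""
        normed = features / np.linalg.norm(features, axis=1, keepdims=True)
        sim = normed @ normed.T                 # pairwise cosine similarity
        n = features.shape[0]
        # Diagonal contributes 0 (distance to itself), so divide by n - 1.
        return (1.0 - sim).sum(axis=1) / (n - 1)

    def select_summary(features: np.ndarray, k: int) -> np.ndarray:
        """Pick the k most diverse segments and return their indices in temporal order."""
        scores = diversity_scores(features)
        top_k = np.argsort(scores)[::-1][:k]
        return np.sort(top_k)

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        # Stand-in for precomputed per-segment features, e.g. 20 segments x 512 dims.
        segment_features = rng.normal(size=(20, 512))
        print(select_summary(segment_features, k=5))

This sketch uses mean pairwise cosine distance as a stand-in importance score; the paper's actual diversity estimation, query personalisation, and feature-extraction pipeline are described in the article itself.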