Video summarisation by deep visual and categorical diversity

The authors propose a video‐summarisation method based on visual and categorical diversities using pre‐trained deep visual and categorical models. Their method extracts visual and categorical features from a pre‐trained deep convolutional network (DCN) and a pre‐trained word‐embedding matrix. Using...
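
The abstract is truncated, but the two feature streams it mentions can be illustrated with a short, hypothetical Python sketch: per-frame visual descriptors taken from a pre-trained deep convolutional network, and categorical descriptors obtained by looking up each frame's predicted class in a pre-trained word-embedding matrix. The backbone choice (ResNet-50), the dummy frames, and the random stand-in embedding matrix are assumptions for illustration only, not the authors' exact configuration.

```python
import torch
import torchvision.models as models
import torchvision.transforms as T

# Visual stream: a pre-trained CNN with its classifier head removed,
# yielding one 2048-d descriptor per frame (ResNet-50 is an assumed choice).
backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT).eval()
visual_extractor = torch.nn.Sequential(*list(backbone.children())[:-1])

preprocess = T.Compose([
    T.ConvertImageDtype(torch.float),            # uint8 [0, 255] -> float [0, 1]
    T.Resize(256),
    T.CenterCrop(224),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

# Dummy batch of 16 video frames standing in for a decoded shot.
frames = torch.randint(0, 256, (16, 3, 360, 640), dtype=torch.uint8)
x = preprocess(frames)

with torch.no_grad():
    visual_feats = visual_extractor(x).flatten(1)      # (16, 2048)

    # Categorical stream: map each frame's top-1 ImageNet class into a
    # word-embedding matrix. A random matrix stands in for a pre-trained
    # embedding such as GloVe/word2vec (an assumption for this sketch).
    top1 = backbone(x).argmax(dim=1)                   # (16,) class indices
    embedding_matrix = torch.randn(1000, 300)
    categorical_feats = embedding_matrix[top1]         # (16, 300)

print(visual_feats.shape, categorical_feats.shape)
```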

Bibliographic Details
Main Authors: Pedro Atencio, German Sánchez‐Torres, John William Branch, Claudio Delrieux
Format: Article
Language: English
Published: Wiley 2019-09-01
Series: IET Computer Vision
Online Access: https://doi.org/10.1049/iet-cvi.2018.5436