Face, body, voice: video person-clustering with multiple modalities

Bibliographic Details
Main Authors: Brown, A, Kalogeiton, V, Zisserman, A
Format: Conference item
Language: English
Published: IEEE 2021
collection OXFORD
description The objective of this work is person-clustering in videos – grouping characters according to their identity. Previous methods focus on the narrower task of face-clustering, and for the most part ignore other cues such as the person’s voice, their overall appearance (hair, clothes, posture), and the editing structure of the videos. Similarly, most current datasets evaluate only the task of face-clustering, rather than person-clustering. This limits their applicability to downstream applications such as story understanding, which require person-level, rather than only face-level, reasoning. In this paper we make contributions to address both these deficiencies: first, we introduce a Multi-Modal High-Precision Clustering algorithm for person-clustering in videos using cues from several modalities (face, body, and voice). Second, we introduce a Video Person-Clustering dataset for evaluating multi-modal person-clustering. It contains body-tracks for each annotated character, face-tracks when visible, and voice-tracks when speaking, with their associated features. The dataset is by far the largest of its kind, and covers films and TV shows representing a wide range of demographics. Finally, we show the effectiveness of using multiple modalities for person-clustering, explore the use of this new broad task for story understanding through character co-occurrences, and achieve a new state of the art on all available datasets for face and person-clustering.
id oxford-uuid:8dd99713-2ec7-4971-a705-639a43ab6110
institution University of Oxford