YouMVOS: an actor-centric multi-shot video object segmentation dataset


Bibliographic Details
Main Authors: Wei, D, Kharbanda, S, Arora, S, Roy, R, Jain, N, Palrecha, A, Shah, T, Mathur, S, Mathur, R, Kemkar, A, Chakravarthy, A, Lin, Z, Jang, W-D, Tang, Y, Bai, S, Tompkin, J, Torr, PHS, Pfister, H
Format: Conference item
Language: English
Published: IEEE 2022
Description
Summary: Many video understanding tasks require analyzing multi-shot videos, but existing datasets for video object segmentation (VOS) only consider single-shot videos. To address this challenge, we collected a new dataset, YouMVOS, of 200 popular YouTube videos spanning ten genres, where each video is on average five minutes long and contains 75 shots. We selected recurring actors and annotated 431K segmentation masks at six frames per second, exceeding previous datasets in average video duration, object variation, and narrative structure complexity. We incorporated good practices of model architecture design, memory management, and multi-shot tracking into an existing video segmentation method to build competitive baseline methods. Through error analysis, we found that these baselines still fail to cope with cross-shot appearance variation on our YouMVOS dataset. Thus, our dataset poses new challenges in multi-shot segmentation towards better video analysis. Data, code, and pre-trained models are available at https://donglaiw.github.io/proj/youMVOS