Coherent video generation for multiple hand-held cameras with dynamic foreground

Abstract: For many social events such as public performances, multiple hand-held cameras may capture the same event. This footage is often collected by amateur cinematographers who typically have little control over the scene and may not pay close attention to the camera. For these reasons, each individually captured video may fail to cover the whole time of the event, or may lose track of interesting foreground content such as a performer. We introduce a new algorithm that can synthesize a single smooth video sequence of moving foreground objects captured by multiple hand-held cameras. This allows later viewers to gain a cohesive narrative experience that can transition between different cameras, even though the input footage may be less than ideal. We first introduce a graph-based method for selecting a good transition route. This allows us to automatically select good cut points for the hand-held videos, so that smooth transitions can be created between the resulting video shots. We also propose a method to synthesize a smooth photorealistic transition video between each pair of hand-held cameras, which preserves dynamic foreground content during this transition. Our experiments demonstrate that our method outperforms previous state-of-the-art methods, which struggle to preserve dynamic foreground content.

Bibliographic Details
Main Authors: Fang-Lue Zhang, Connelly Barnes, Hao-Tian Zhang, Junhong Zhao, Gabriel Salas
Format: Article
Language: English
Published: SpringerOpen, 2020-09-01
Series: Computational Visual Media
Subjects: video editing; smooth temporal transitions; dynamic foreground; multiple cameras; hand-held cameras
Online Access: https://doi.org/10.1007/s41095-020-0187-3
collection DOAJ
description Abstract For many social events such as public performances, multiple hand-held cameras may capture the same event. This footage is often collected by amateur cinematographers who typically have little control over the scene and may not pay close attention to the camera. For these reasons, each individually captured video may fail to cover the whole time of the event, or may lose track of interesting foreground content such as a performer. We introduce a new algorithm that can synthesize a single smooth video sequence of moving foreground objects captured by multiple hand-held cameras. This allows later viewers to gain a cohesive narrative experience that can transition between different cameras, even though the input footage may be less than ideal. We first introduce a graph-based method for selecting a good transition route. This allows us to automatically select good cut points for the hand-held videos, so that smooth transitions can be created between the resulting video shots. We also propose a method to synthesize a smooth photorealistic transition video between each pair of hand-held cameras, which preserves dynamic foreground content during this transition. Our experiments demonstrate that our method outperforms previous state-of-the-art methods, which struggle to preserve dynamic foreground content.
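The abstract's graph-based transition-route selection can be viewed as a shortest-path problem: nodes are candidate cut points (camera, frame) and edge weights penalize jarring transitions. The sketch below is an illustrative assumption, not the authors' actual formulation; the node layout, cost values, and the function name `best_transition_route` are all hypothetical, with Dijkstra's algorithm standing in for whatever route-selection procedure the paper uses.

```python
import heapq

def best_transition_route(nodes, edges, source, target):
    """Dijkstra shortest path over a hypothetical transition graph.

    nodes: iterable of (camera_id, frame) cut candidates.
    edges: dict mapping node -> list of (neighbor, cost), where cost is
           an assumed penalty for moving between the two shots.
    Returns (total_cost, route) or (float('inf'), []) if unreachable.
    """
    dist = {n: float('inf') for n in nodes}
    prev = {}
    dist[source] = 0.0
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == target:
            break
        if d > dist[u]:
            continue  # stale heap entry
        for v, cost in edges.get(u, []):
            nd = d + cost
            if nd < dist[v]:
                dist[v] = nd
                prev[v] = u
                heapq.heappush(heap, (nd, v))
    if dist[target] == float('inf'):
        return float('inf'), []
    # Reconstruct the route by walking predecessors back to the source.
    route = [target]
    while route[-1] != source:
        route.append(prev[route[-1]])
    return dist[target], route[::-1]

# Toy example: two cameras, with a candidate cut from A to B at frame 30.
# Edge costs are made-up "transition smoothness" penalties.
nodes = [('A', 0), ('A', 30), ('B', 30), ('B', 60)]
edges = {
    ('A', 0): [(('A', 30), 0.1)],   # stay on camera A
    ('A', 30): [(('B', 30), 0.5)],  # cut A -> B at frame 30
    ('B', 30): [(('B', 60), 0.1)],  # stay on camera B
}
cost, route = best_transition_route(nodes, edges, ('A', 0), ('B', 60))
```

Once a minimum-cost route is found, each cross-camera edge on it marks a cut point where a transition video would be synthesized.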
id doaj.art-8b47411a758946f397c788d90f4d713b
institution Directory Open Access Journal
issn 2096-0433, 2096-0662
Author affiliations: Fang-Lue Zhang (Victoria University of Wellington); Connelly Barnes (Adobe Research); Hao-Tian Zhang (Stanford University); Junhong Zhao (Victoria University of Wellington); Gabriel Salas (Victoria University of Wellington)
Volume 6, Issue 3, pp. 291-306, doi: 10.1007/s41095-020-0187-3