FreeInit: bridging initialization gap in video diffusion models

Though diffusion-based video generation has witnessed rapid progress, the inference results of existing models still exhibit unsatisfactory temporal consistency and unnatural dynamics. In this paper, we delve into the noise initialization of video diffusion models and discover an implicit training-inference gap that accounts for the unsatisfactory inference quality. Our key findings are: 1) the spatial-temporal frequency distribution of the initial noise at inference is intrinsically different from that at training, and 2) the denoising process is significantly influenced by the low-frequency components of the initial noise. Motivated by these observations, we propose a concise yet effective inference sampling strategy, FreeInit, which significantly improves the temporal consistency of videos generated by diffusion models. By iteratively refining the spatial-temporal low-frequency components of the initial latent during inference, FreeInit compensates for the initialization gap between training and inference, effectively improving the subject appearance and temporal consistency of generation results. Extensive experiments demonstrate that FreeInit consistently enhances the generation quality of various text-to-video diffusion models without additional training or fine-tuning.
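The iterative refinement described in the abstract admits a compact implementation: after each sampling pass, the denoised latent is diffused back to the initial timestep, its spatial-temporal low-frequency band is retained, and the high-frequency band is replaced with that of freshly sampled Gaussian noise before sampling again. The following is a minimal PyTorch sketch of that frequency-mixing step; the Gaussian filter shape, the cutoff d0, and the (B, C, T, H, W) tensor layout are illustrative assumptions, not the authors' exact implementation (see the arXiv paper and official code for details).

    # Sketch of FreeInit-style noise reinitialization (assumptions noted above).
    import torch

    def gaussian_low_pass_filter(shape, d0=0.25):
        """Build a 3D Gaussian low-pass mask over (T, H, W) frequency space.

        d0 is an assumed normalized cutoff; larger values keep more frequencies.
        """
        T, H, W = shape
        t = torch.fft.fftfreq(T).view(T, 1, 1)
        h = torch.fft.fftfreq(H).view(1, H, 1)
        w = torch.fft.fftfreq(W).view(1, 1, W)
        # Squared distance from the zero-frequency origin, per frequency bin.
        d2 = t**2 + h**2 + w**2
        return torch.exp(-d2 / (2 * d0**2))

    def reinit_noise(diffused_latent, fresh_noise, d0=0.25):
        """Mix low frequencies of the diffused latent with high frequencies
        of fresh Gaussian noise.

        diffused_latent: (B, C, T, H, W) latent obtained by forward-diffusing
                         the previous pass's denoised result back to timestep T.
        fresh_noise:     (B, C, T, H, W) newly sampled Gaussian noise.
        """
        lpf = gaussian_low_pass_filter(diffused_latent.shape[-3:], d0).to(diffused_latent.device)
        lat_freq = torch.fft.fftn(diffused_latent, dim=(-3, -2, -1))
        noise_freq = torch.fft.fftn(fresh_noise, dim=(-3, -2, -1))
        # Keep the latent's spatial-temporal low-frequency band; resample the rest.
        mixed = lat_freq * lpf + noise_freq * (1 - lpf)
        return torch.fft.ifftn(mixed, dim=(-3, -2, -1)).real

In a full FreeInit loop, reinit_noise would be applied for a small number of refinement iterations, each followed by a complete denoising pass, which is how the method improves temporal consistency without any training or fine-tuning.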

Bibliographic Details
Main Authors: Wu, Tianxing; Si, Chenyang; Jiang, Yuming; Huang, Ziqi; Liu, Ziwei
Other Authors: College of Computing and Data Science
Format: Conference Paper
Conference: 2024 European Conference on Computer Vision (ECCV)
Language: English
Published: 2024
Subjects: Computer and Information Science; Computer vision; Pattern recognition
Online Access: https://hdl.handle.net/10356/180265
http://arxiv.org/abs/2312.07537v2
DOI: 10.48550/arXiv.2312.07537
Institution: Nanyang Technological University
Citation: Wu, T., Si, C., Jiang, Y., Huang, Z. & Liu, Z. (2024). FreeInit: bridging initialization gap in video diffusion models. 2024 European Conference on Computer Vision (ECCV). https://dx.doi.org/10.48550/arXiv.2312.07537
Funding: This study is supported by the Ministry of Education (MOE), Singapore, under its MOE AcRF Tier 2 (MOET2EP20221-0012), NTU NAP, and under the RIE2020 Industry Alignment Fund – Industry Collaboration Projects (IAF-ICP) Funding Initiative, as well as cash and in-kind contributions from the industry partner(s).
Rights: © 2024 ECCV. All rights reserved. This article may be downloaded for personal use only. Any other use requires prior permission of the copyright holder. Submitted/Accepted version, application/pdf.