Learning a spatial-temporal texture transformer network for video inpainting
We study video inpainting, which aims to recover realistic textures from damaged frames. Recent progress has been made by taking other frames as references so that relevant textures can be transferred to damaged frames. However, existing video inpainting approaches neglect the ability of the model t...
Main Authors: Pengsen Ma, Tao Xue
Format: Article
Language: English
Published: Frontiers Media S.A., 2022-10-01
Series: Frontiers in Neurorobotics
Online Access: https://www.frontiersin.org/articles/10.3389/fnbot.2022.1002453/full
Similar Items
- FSTT: Flow-Guided Spatial Temporal Transformer for Deep Video Inpainting
  by: Ruixin Liu, et al.
  Published: (2023-10-01)
- Coherent Semantic Spatial-Temporal Attention Network for Video Inpainting
  by: LIU Lang, LI Liang, DAN Yuan-hong
  Published: (2021-10-01)
- Spatio-temporal image inpainting for video applications
  by: Voronin Viacheslav, et al.
  Published: (2017-01-01)
- Texture Inpainting Using Covariance in Wavelet Domain
  by: Biradar Rajkumar L., et al.
  Published: (2013-09-01)
- Deep Transformer Based Video Inpainting Using Fast Fourier Tokenization
  by: Taewan Kim, et al.
  Published: (2024-01-01)