You said that?: Synthesising talking faces from audio
We describe a method for generating a video of a talking face. The method takes still images of the target face and an audio speech segment as inputs, and generates a video of the target face lip synched with the audio. The method runs in real time and is applicable to faces and audio not seen at training time.
Main Authors: Jamaludin, A, Chung, JS, Zisserman, A
Format: Journal article
Language: English
Published: Springer, 2019
Related records
- You said that?
  By: Chung, JS, et al.
  Published: (2017)
- The conversation: deep audio-visual speech enhancement
  By: Afouras, T, et al.
  Published: (2018)
- Self-supervised learning of audio-visual objects from video
  By: Afouras, T, et al.
  Published: (2020)
- "My encik said": talking about national service with memes
  By: Tsoi, Wai Yee
  Published: (2019)
- My lips are concealed: audio-visual speech enhancement through obstructions
  By: Afouras, T, et al.
  Published: (2019)