The conversation: deep audio-visual speech enhancement

Bibliographic Details
Main Authors: Afouras, T, Chung, JS, Zisserman, A
Format: Conference item
Published: International Speech Communication Association, 2018
Description
Summary: Our goal is to isolate individual speakers from multi-talker simultaneous speech in videos. Existing works in this area have focussed on trying to separate utterances from known speakers in controlled environments. In this paper, we propose a deep audio-visual speech enhancement network that is able to separate a speaker's voice given lip regions in the corresponding video, by predicting both the magnitude and the phase of the target signal. The method is applicable to speakers unheard and unseen during training, and for unconstrained environments. We demonstrate strong quantitative and qualitative results, isolating extremely challenging real-world examples.
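
The abstract's key technical point is that the network predicts both the magnitude and the phase of the target signal, conditioned on lip-region video features. The following is a minimal sketch of that general idea, not the authors' architecture: every name and layer size here (AVEnhancer, n_freq, the GRU fusion, the (cos, sin) phase head) is an illustrative assumption.

```python
# Sketch (not the paper's network): given a mixture magnitude spectrogram
# and per-frame lip-region features, predict a magnitude mask and a phase
# estimate for the target speaker. All shapes are illustrative.
import torch
import torch.nn as nn

class AVEnhancer(nn.Module):
    def __init__(self, n_freq=257, vid_feat=512, hidden=256):
        super().__init__()
        # Encode the mixture magnitude spectrogram, shape (B, T, n_freq).
        self.audio_enc = nn.Sequential(nn.Linear(n_freq, hidden), nn.ReLU())
        # Encode per-frame lip-region features, shape (B, T, vid_feat),
        # e.g. embeddings from a pretrained lip-reading network.
        self.video_enc = nn.Sequential(nn.Linear(vid_feat, hidden), nn.ReLU())
        # Fuse the audio and visual streams over time.
        self.fuse = nn.GRU(2 * hidden, hidden, batch_first=True)
        # Magnitude head: a soft mask in [0, 1] applied to the mixture.
        self.mag_head = nn.Sequential(nn.Linear(hidden, n_freq), nn.Sigmoid())
        # Phase head: predict (cos, sin) per frequency bin.
        self.phase_head = nn.Linear(hidden, 2 * n_freq)

    def forward(self, mix_mag, lip_feats):
        a = self.audio_enc(mix_mag)       # (B, T, hidden)
        v = self.video_enc(lip_feats)     # (B, T, hidden)
        h, _ = self.fuse(torch.cat([a, v], dim=-1))
        mag = self.mag_head(h) * mix_mag  # masked target magnitude
        cos, sin = self.phase_head(h).chunk(2, dim=-1)
        phase = torch.atan2(sin, cos)     # target phase estimate in (-pi, pi]
        return mag, phase

# Toy usage: batch of 2 clips, 100 STFT frames.
model = AVEnhancer()
mag, phase = model(torch.rand(2, 100, 257), torch.randn(2, 100, 512))
print(mag.shape, phase.shape)  # torch.Size([2, 100, 257]) for both
```

Representing phase as a (cos, sin) pair and recovering the angle with atan2 is one common way to avoid the 2π wrap-around of regressing the angle directly; it is a design choice for this sketch, not necessarily the paper's. The predicted magnitude and phase together define a complex spectrogram from which a waveform can be recovered via an inverse STFT.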