Lip Reading Sentences in the Wild

The goal of this work is to recognise phrases and sentences being spoken by a talking face, with or without the audio. Unlike previous works that have focused on recognising a limited number of words or phrases, we tackle lip reading as an open-world problem: unconstrained natural language sentences in videos recorded in the wild. Our key contributions are: (1) a Watch, Listen, Attend and Spell (WLAS) network that learns to transcribe videos of mouth motion to characters; (2) a curriculum learning strategy to accelerate training and reduce overfitting; and (3) a Lip Reading Sentences (LRS) dataset for visual speech recognition, consisting of over 100,000 natural sentences from British television. The WLAS model trained on the LRS dataset surpasses all previous work on standard lip reading benchmark datasets, often by a significant margin. It also beats a professional lip reader on videos from BBC television, and we demonstrate that when audio is available, visual information helps to improve speech recognition performance.
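
No implementation accompanies this record; purely as an illustrative sketch, the snippet below shows one way a WLAS-style model could be wired up in PyTorch: a "watch" encoder for video features, a "listen" encoder for audio features, an attention step over each, and a character-level "spell" decoder. The class name WLASSketch, all layer types, and all dimensions are assumptions for illustration, not the authors' architecture details. The paper's curriculum learning strategy would live in the data pipeline (feeding short sequences first) and is not shown.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class WLASSketch(nn.Module):
        """Hypothetical WLAS-style dual-attention encoder-decoder (sketch only)."""
        def __init__(self, vis_dim=512, aud_dim=40, hid=256, n_chars=40):
            super().__init__()
            self.hid = hid
            self.watch = nn.LSTM(vis_dim, hid, batch_first=True)   # video encoder
            self.listen = nn.LSTM(aud_dim, hid, batch_first=True)  # audio encoder
            self.embed = nn.Embedding(n_chars, hid)                # character embeddings
            self.spell = nn.LSTMCell(3 * hid, hid)                 # decoder: embed + two contexts
            self.out = nn.Linear(hid, n_chars)                     # per-step character logits

        def attend(self, query, keys):
            # Dot-product attention: query (B, H) against encoder states (B, T, H).
            scores = torch.bmm(keys, query.unsqueeze(2)).squeeze(2)   # (B, T)
            weights = F.softmax(scores, dim=1)
            return torch.bmm(weights.unsqueeze(1), keys).squeeze(1)   # (B, H) context

        def forward(self, vis, aud, chars):
            v_states, _ = self.watch(vis)     # (B, Tv, H)
            a_states, _ = self.listen(aud)    # (B, Ta, H)
            B = vis.size(0)
            h = vis.new_zeros(B, self.hid)
            c = vis.new_zeros(B, self.hid)
            logits = []
            for t in range(chars.size(1)):    # teacher forcing with ground-truth characters
                ctx_v = self.attend(h, v_states)   # attend to the video stream
                ctx_a = self.attend(h, a_states)   # attend to the audio stream
                step = torch.cat([self.embed(chars[:, t]), ctx_v, ctx_a], dim=1)
                h, c = self.spell(step, (h, c))
                logits.append(self.out(h))
            return torch.stack(logits, dim=1)  # (B, Tc, n_chars)

A toy forward pass, with all shapes hypothetical:

    model = WLASSketch()
    vis = torch.randn(2, 75, 512)            # 2 clips, 75 frames of 512-d mouth features
    aud = torch.randn(2, 300, 40)            # 2 clips, 300 frames of 40-d audio features
    chars = torch.randint(0, 40, (2, 20))    # target character indices
    logits = model(vis, aud, chars)          # (2, 20, 40)

Attending to both streams independently is what lets such a model transcribe with or without audio, as the abstract describes: the audio branch can simply be dropped or masked at inference.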

Bibliographic Details
Main Authors: Chung, J; Senior, A; Vinyals, O; Zisserman, A
Format: Conference item
Published: Institute of Electrical and Electronics Engineers, 2017
Collection: OXFORD
Record ID: oxford-uuid:8fd265a4-430b-464c-902c-8fc2c3dcd33a
Institution: University of Oxford