Modeling continuous visual speech using boosted viseme models
| Main Authors: | |
| --- | --- |
| Other Authors: | |
| Format: | Conference Paper |
| Language: | English |
| Published: | 2009 |
| Subjects: | |
| Online Access: | https://hdl.handle.net/10356/91029 http://hdl.handle.net/10220/6002 |
| Summary: | In this paper, a novel connected-viseme approach for modeling continuous visual speech is presented. The approach adopts AdaBoost-HMMs as the viseme models. Continuous visual speech is modeled by connecting the viseme models using the level building algorithm. The approach is applied to identify words and phrases in visual speech. The recognition results indicate that the proposed method has better performance than the conventional approach. |
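The summary only names the building blocks of the approach: per-viseme AdaBoost-HMMs scored over segments of the visual-speech feature sequence, connected into a word or phrase hypothesis with the level building algorithm. The sketch below illustrates the level building step only, under assumptions not stated in the record: the `segment_scores` array, its indexing convention, and the toy example are hypothetical stand-ins, and in the paper each segment score would come from an AdaBoost-HMM likelihood evaluation whose details the record does not give.

```python
import numpy as np

NEG_INF = -np.inf


def level_building(segment_scores, max_levels):
    """Connect per-segment viseme scores into the best viseme string.

    segment_scores[v, s, t] is a (hypothetical) log-likelihood that viseme
    model v explains frames s..t-1 of the utterance; in the paper this score
    would come from evaluating an AdaBoost-HMM viseme model on that segment.
    Returns (best viseme sequence, total log-likelihood).
    """
    num_visemes, t_plus_1, _ = segment_scores.shape
    T = t_plus_1 - 1

    # best[l, t]: best score of covering frames 0..t-1 with exactly l visemes
    best = np.full((max_levels + 1, T + 1), NEG_INF)
    best[0, 0] = 0.0
    back = {}  # (level, end_frame) -> (start_frame, viseme)

    # build one level (one more connected viseme) at a time
    for level in range(1, max_levels + 1):
        for t in range(1, T + 1):
            for s in range(t):
                if best[level - 1, s] == NEG_INF:
                    continue
                for v in range(num_visemes):
                    cand = best[level - 1, s] + segment_scores[v, s, t]
                    if cand > best[level, t]:
                        best[level, t] = cand
                        back[(level, t)] = (s, v)

    # choose the number of visemes that best explains the whole utterance
    best_level = int(np.argmax(best[1:, T])) + 1
    total = best[best_level, T]

    # backtrack the winning viseme sequence
    sequence, t = [], T
    for level in range(best_level, 0, -1):
        s, v = back[(level, t)]
        sequence.append(v)
        t = s
    return list(reversed(sequence)), total


if __name__ == "__main__":
    # toy stand-in: 3 viseme models, a 6-frame utterance, random segment scores
    rng = np.random.default_rng(0)
    num_visemes, T = 3, 6
    scores = rng.normal(size=(num_visemes, T + 1, T + 1))
    print(level_building(scores, max_levels=4))
```

The dynamic program grows one "level" (one additional viseme in the string) at a time and keeps, for every level and end frame, the best-scoring segmentation, which is the standard level building formulation; how the paper constrains viseme counts or prunes levels is not described in this record.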