Detection and Separation of Speech Event Using Audio and Video Information Fusion and Its Application to Robust Speech Interface


Bibliographic Details
Main Authors: Futoshi Asano, Kiyoshi Yamamoto, Isao Hara, Jun Ogata, Takashi Yoshimura, Yoichi Motomura, Naoyuki Ichimura, Hideki Asoh
Format: Article
Language:English
Published: SpringerOpen 2004-09-01
Series:EURASIP Journal on Advances in Signal Processing
Subjects:
Online Access:http://dx.doi.org/10.1155/S1110865704402303
Description
Summary:A method of detecting speech events in a multiple-sound-source condition using audio and video information is proposed. For detecting speech events, sound localization using a microphone array and human tracking by stereo vision are combined by a Bayesian network. From the inference results of the Bayesian network, the time and location of speech events can be obtained. The detected speech-event information is then utilized in a robust speech interface. A maximum-likelihood adaptive beamformer is employed as a preprocessor for the speech recognizer to separate the speech signal from environmental noise. The beamformer coefficients are updated based on the speech-event information, which is also used by the speech recognizer to extract the speech segment.
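The maximum-likelihood adaptive beamformer mentioned in the summary is, in its standard form, equivalent to the MVDR solution: weights minimize noise power subject to a distortionless constraint toward the speaker. A minimal sketch of the per-frequency weight computation, assuming a noise covariance estimated from non-speech segments (as flagged by the event detector) and a steering vector toward the detected speaker location; function and variable names here are illustrative, not from the paper:

```python
import numpy as np

def ml_beamformer_weights(noise_cov, steering):
    """Maximum-likelihood (MVDR) beamformer weights for one frequency bin.

    noise_cov: (M, M) complex noise spatial covariance, estimated from
               frames where no speech event is detected (assumption of
               this sketch).
    steering:  (M,) complex steering vector toward the detected speaker.

    Returns w such that w^H x passes the target direction undistorted
    (w^H d = 1) while minimizing output noise power.
    """
    r_inv_d = np.linalg.solve(noise_cov, steering)   # R_n^{-1} d
    return r_inv_d / (steering.conj() @ r_inv_d)     # normalize constraint

# Example: 4 mics, white noise, broadside steering -> uniform weights
w = ml_beamformer_weights(np.eye(4, dtype=complex),
                          np.ones(4, dtype=complex))
```

In use, these weights would be recomputed whenever the Bayesian network reports a new speech event or speaker location, then applied per STFT bin as `y = w.conj() @ x`.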
ISSN:1687-6172
1687-6180