Two dissimilarity measures for HMMs and their application in phoneme model clustering
Main Authors:
Format: Conference item
Published: 2002
Summary: This paper introduces two approximations of the Kullback-Leibler divergence for hidden Markov models (HMMs). The first is a generalization of an approximation originally presented for HMMs with discrete observation densities; in that case, the HMMs are assumed to be ergodic and their topologies similar. The second is a modification of the first: the HMM topologies are assumed to be left-to-right with no skips, but, unlike in the first approximation, the models can have a different number of states. Both measures can be expressed in closed form for HMMs with Gaussian (single-mixture) observation densities. The proposed dissimilarity measures were evaluated in the clustering of acoustic phoneme models for multilingual speech recognition. The resulting recognizers were compared both to a recognition system based on a previously presented dissimilarity measure and to one based on phonetic knowledge. The performance of the multilingual recognizers was evaluated on a speaker-independent isolated word recognition task. Only small differences were observed in the recognition accuracy of the multilingual recognizers; however, the computational cost of the proposed methods is significantly lower.
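The abstract states that both measures admit a closed form for single-Gaussian observation densities but does not reproduce the formula here. As a hedged illustration only, the sketch below (hypothetical names `kl_gauss`, `hmm_dissimilarity`) shows one common state-matched approximation in that spirit for left-to-right HMMs with the same number of states: each matched state pair contributes the closed-form KL divergence between its Gaussians plus the KL divergence between the corresponding transition rows. This is an assumed construction for clarity, not the paper's exact measure.

```python
import numpy as np

def kl_gauss(mu1, var1, mu2, var2):
    """Closed-form KL divergence between two diagonal-covariance Gaussians."""
    mu1, var1, mu2, var2 = map(np.asarray, (mu1, var1, mu2, var2))
    return 0.5 * np.sum(np.log(var2 / var1) + (var1 + (mu1 - mu2) ** 2) / var2 - 1.0)

def hmm_dissimilarity(hmm_a, hmm_b):
    """
    State-matched KL-style dissimilarity between two left-to-right Gaussian HMMs
    with the same number of states (an assumed, simplified setting).
    Each HMM is a dict with:
      'means' : (S, D) array of state means
      'vars'  : (S, D) array of diagonal state variances
      'trans' : (S, S) row-stochastic transition matrix
    """
    S = len(hmm_a['means'])
    total = 0.0
    for s in range(S):
        # Contribution of the matched observation densities.
        total += kl_gauss(hmm_a['means'][s], hmm_a['vars'][s],
                          hmm_b['means'][s], hmm_b['vars'][s])
        # Contribution of the matched transition distributions
        # (only transitions that are possible in model A; small floor on B
        # avoids division by zero in this illustrative sketch).
        pa, pb = hmm_a['trans'][s], hmm_b['trans'][s]
        mask = pa > 0
        total += np.sum(pa[mask] * np.log(pa[mask] / np.maximum(pb[mask], 1e-12)))
    return total

def symmetric_dissimilarity(hmm_a, hmm_b):
    """Symmetrized version, since the KL divergence itself is not symmetric."""
    return 0.5 * (hmm_dissimilarity(hmm_a, hmm_b) + hmm_dissimilarity(hmm_b, hmm_a))
```

A symmetrized dissimilarity such as `symmetric_dissimilarity` could then feed a standard agglomerative clustering of phoneme models, which is the kind of use the abstract describes for multilingual model sharing.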