Theoretical learning guarantees applied to acoustic modeling

Bibliographic Details
Main Authors: Christopher D. Shulby, Martha D. Ferreira, Rodrigo F. de Mello, Sandra M. Aluisio
Format: Article
Language: English
Published: Sociedade Brasileira de Computação, 2019-01-01
Series: Journal of the Brazilian Computer Society
Online Access: http://link.springer.com/article/10.1186/s13173-018-0081-3
Description
Summary: In low-resource scenarios, for example, small datasets or a lack of available computational resources, state-of-the-art deep learning methods for speech recognition have been known to fail. It is possible to achieve more robust models if care is taken to ensure the learning guarantees provided by statistical learning theory. This work presents a shallow, hybrid approach in which a convolutional neural network feature extractor feeds a hierarchical tree of support vector machines for classification. Here, we show that gross errors present even in state-of-the-art systems can be avoided and that an accurate acoustic model can be built in a hierarchical fashion. Furthermore, we present proof that our algorithm adheres to the learning guarantees provided by statistical learning theory. The acoustic model produced in this work outperforms traditional hidden Markov models, and the hierarchical support vector machine tree outperforms a multi-class multilayer perceptron classifier using the same features. More importantly, we isolate the performance of the acoustic model and provide results at both the frame and phoneme levels, assessing the true robustness of the model. We show that accurate and robust recognition rates can be obtained even with a small amount of data.
ISSN: 0104-6500, 1678-4804
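
The summary above describes a pipeline in which CNN-derived frame features are classified by a hierarchical tree of SVMs. The following is a minimal sketch of that idea under stated assumptions: the phoneme groupings, tree depth (a single split here), feature dimension, and all names are hypothetical placeholders, not the exact configuration used in the paper.

```python
import numpy as np
from sklearn.svm import SVC

class HierarchicalSVMTree:
    """One-level hierarchy: a root SVM routes each frame to one of two
    phoneme groups, and a per-group SVM classifies within that group."""

    def __init__(self, group_a, group_b):
        self.groups = (set(group_a), set(group_b))
        self.router = SVC(kernel="rbf")           # decides which group a frame belongs to
        self.leaves = [SVC(kernel="rbf"),          # classifies phonemes within group 0
                       SVC(kernel="rbf")]          # classifies phonemes within group 1

    def fit(self, X, y):
        y = np.asarray(y)
        group_ids = np.array([0 if label in self.groups[0] else 1 for label in y])
        self.router.fit(X, group_ids)
        for g in (0, 1):
            mask = group_ids == g
            self.leaves[g].fit(X[mask], y[mask])
        return self

    def predict(self, X):
        routed = self.router.predict(X)
        preds = np.empty(len(X), dtype=object)
        for g in (0, 1):
            mask = routed == g
            if mask.any():
                preds[mask] = self.leaves[g].predict(X[mask])
        return preds

# Toy usage: X stands in for CNN feature vectors per frame, y for phoneme labels.
rng = np.random.default_rng(0)
X = rng.standard_normal((400, 64))              # 64-dim CNN features (assumed)
y = rng.choice(["a", "e", "p", "t"], size=400)  # hypothetical phoneme inventory
model = HierarchicalSVMTree(group_a={"a", "e"}, group_b={"p", "t"}).fit(X, y)
print(model.predict(X[:5]))
```

In the paper's setting, the tree would plausibly be deeper and organized along phonetic distinctions, with each node trained only on the frames routed to it; the sketch keeps a single split purely to illustrate how routing and per-group classification compose.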