Environment-Aware Knowledge Distillation for Improved Resource-Constrained Edge Speech Recognition
Recent advances in self-supervised learning have allowed automatic speech recognition (ASR) systems to achieve state-of-the-art (SOTA) word error rates (WER) while requiring only a fraction of the labeled data needed by their predecessors. Nevertheless, while such models achieve SOTA results in mat...
| Main Authors: | Arthur Pimentel, Heitor R. Guimarães, Anderson Avila, Tiago H. Falk |
| --- | --- |
| Format: | Article |
| Language: | English |
| Published: | MDPI AG, 2023-11-01 |
| Series: | Applied Sciences |
| Online Access: | https://www.mdpi.com/2076-3417/13/23/12571 |
Similar Items
- Task-specific speech enhancement and data augmentation for improved multimodal emotion recognition under noisy conditions
  by: Shruti Kshirsagar, et al.
  Published: (2023-03-01)
- Head Network Distillation: Splitting Distilled Deep Neural Networks for Resource-Constrained Edge Computing Systems
  by: Yoshitomo Matsubara, et al.
  Published: (2020-01-01)
- Context awareness in the electronic format of dialogue
  by: S. V. Pervukhina
  Published: (2023-04-01)
- Speech Communication
  by: Paul, A. P., et al.
  Published: (2010)
- Speech Enhancement Using Dynamic Learning in Knowledge Distillation via Reinforcement Learning
  by: Shih-Chuan Chu, et al.
  Published: (2023-01-01)