Automated Dysarthria Severity Classification: A Study on Acoustic Features and Deep Learning Techniques


Bibliographic Details
Main Authors: Amlu Anna Joshy, Rajeev Rajan
Format: Article
Language: English
Published: IEEE, 2022-01-01
Series: IEEE Transactions on Neural Systems and Rehabilitation Engineering
Subjects: Deep learning; dysarthria; i-vectors; severity assessment
Online Access: https://ieeexplore.ieee.org/document/9762324/
collection DOAJ
description Assessing the severity level of dysarthria can provide insight into a patient’s improvement, assist pathologists in planning therapy, and aid automatic dysarthric speech recognition systems. In this article, we present a comparative study on the classification of dysarthria severity levels using different deep learning techniques and acoustic features. First, we evaluate basic architectural choices, namely the deep neural network (DNN), convolutional neural network, gated recurrent unit, and long short-term memory network, using the basic speech features: Mel-frequency cepstral coefficients (MFCCs) and constant-Q cepstral coefficients. Next, speech-disorder-specific features computed from prosody, articulation, phonation, and glottal functioning are evaluated on DNN models. Finally, we explore the utility of low-dimensional feature representations obtained through subspace modeling, namely i-vectors, which are then classified using DNN models. Evaluation is carried out on the standard UA-Speech and TORGO databases. The DNN classifier using MFCC-based i-vectors outperforms the other systems, achieving accuracies of 93.97% in the speaker-dependent scenario and 49.22% in the speaker-independent scenario on the UA-Speech database.
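For context on the abstract's front-end features: the sketch below illustrates how MFCCs, the features on which the record's best-performing i-vector system is built, are conventionally computed from a speech waveform (framing, windowing, power spectrum, mel filterbank, log compression, DCT). This is a minimal NumPy illustration of the standard MFCC pipeline, not the authors' implementation; the function name and all parameter defaults are assumptions.

```python
import numpy as np

def mfcc(signal, sr=16000, n_fft=512, hop=160, n_mels=26, n_ceps=13):
    """Minimal MFCC sketch: one row of cepstral coefficients per frame."""
    # Frame the signal and apply a Hamming window
    starts = range(0, len(signal) - n_fft + 1, hop)
    frames = np.array([signal[i:i + n_fft] for i in starts]) * np.hamming(n_fft)

    # Power spectrum of each frame
    power = np.abs(np.fft.rfft(frames, n_fft)) ** 2 / n_fft

    # Triangular mel filterbank between 0 Hz and Nyquist
    def hz2mel(f): return 2595.0 * np.log10(1.0 + f / 700.0)
    def mel2hz(m): return 700.0 * (10.0 ** (m / 2595.0) - 1.0)
    mel_pts = np.linspace(hz2mel(0), hz2mel(sr / 2), n_mels + 2)
    bins = np.floor((n_fft + 1) * mel2hz(mel_pts) / sr).astype(int)
    fbank = np.zeros((n_mels, n_fft // 2 + 1))
    for m in range(1, n_mels + 1):
        l, c, r = bins[m - 1], bins[m], bins[m + 1]
        fbank[m - 1, l:c] = (np.arange(l, c) - l) / max(c - l, 1)  # rising edge
        fbank[m - 1, c:r] = (r - np.arange(c, r)) / max(r - c, 1)  # falling edge

    # Log mel energies, then DCT-II to decorrelate into cepstral coefficients
    log_mel = np.log(power @ fbank.T + 1e-10)
    n = np.arange(n_mels)
    dct = np.cos(np.pi * np.outer(np.arange(n_ceps), (2 * n + 1) / (2 * n_mels)))
    return log_mel @ dct.T
```

In a severity-classification setup like the one described, frame-level MFCC matrices of this shape would be the raw material from which utterance-level i-vectors are extracted before DNN classification.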
id doaj.art-f5833a587fb940579f70d5f0a5ead5e6
institution Directory Open Access Journal
issn 1558-0210
doi 10.1109/TNSRE.2022.3169814
article_number 9762324
volume 30
pages 1147-1157
author_affiliation Amlu Anna Joshy (ORCID: https://orcid.org/0000-0001-9248-2841): Electronics and Communication Engineering Department, College of Engineering Trivandrum, APJ Abdul Kalam Technological University, Thiruvananthapuram, India
author_affiliation Rajeev Rajan: Department of Computer Science and Engineering, Speech and Music Technology Laboratory, IIT Madras, Chennai, India
topic Deep learning
dysarthria
i-vectors
severity assessment