Classification of Motor-Imagery Tasks Using a Large EEG Dataset by Fusing Classifiers Learning on Wavelet-Scattering Features

Bibliographic Details
Main Author: Tuan D. Pham
Format: Article
Language: English
Published: IEEE 2023-01-01
Series: IEEE Transactions on Neural Systems and Rehabilitation Engineering
Online Access: https://ieeexplore.ieee.org/document/10032556/
Description
Summary: Brain-computer or brain-machine interface technology allows humans to control machines with their thoughts via brain signals. In particular, these interfaces can assist people with neurological diseases in understanding speech, or people with physical disabilities in operating devices such as wheelchairs. Motor-imagery tasks play a fundamental role in brain-computer interfaces. This study introduces an approach for classifying motor-imagery tasks in a brain-computer interface environment, which remains a challenge for rehabilitation technology based on electroencephalogram sensors. The methods used and developed to address the classification include wavelet time and image scattering networks, fuzzy recurrence plots, support vector machines, and classifier fusion. The rationale for combining the outputs of two classifiers, trained respectively on wavelet-time and wavelet-image scattering features of the brain signals, is that the two feature sets are complementary and can be fused effectively with a novel fuzzy rule-based system. A large, challenging electroencephalogram dataset for motor imagery-based brain-computer interfacing was used to test the efficacy of the proposed approach. Experimental results from within-session classification show the potential of the new model, which improves classification accuracy by 7% over the best existing classifier built on state-of-the-art artificial intelligence (76% versus 69%, respectively). In the cross-session experiment, which poses a more challenging and practical classification task, the proposed fusion model improves accuracy by 11% (65% versus 54%, respectively). The technical novelty presented herein, and its further exploration, are promising for developing a reliable sensor-based intervention that helps people with neurodisabilities improve their quality of life.
ISSN: 1558-0210
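
The abstract describes a two-stream pipeline: support vector machine classifiers trained separately on wavelet-time and wavelet-image scattering features, with their outputs combined by a fuzzy rule-based fusion system. The sketch below is only an illustration of that general idea, not the paper's implementation: it uses synthetic stand-ins for the two scattering feature sets, and it replaces the fuzzy rule-based fusion with a simple probability-averaging rule to stay self-contained. All variable names, dimensions, and parameters are assumptions.

```python
# Illustrative sketch only; NOT the author's implementation.
# Two precomputed feature views per trial stand in for wavelet-time and
# wavelet-image scattering features; fusion is a plain probability average
# rather than the paper's fuzzy rule-based system.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n_trials, n_time_feat, n_img_feat = 400, 128, 256

# Synthetic binary motor-imagery labels and two synthetic feature views.
y = rng.integers(0, 2, n_trials)
X_time = rng.normal(size=(n_trials, n_time_feat)) + y[:, None] * 0.3
X_img = rng.normal(size=(n_trials, n_img_feat)) + y[:, None] * 0.3

idx_train, idx_test = train_test_split(
    np.arange(n_trials), test_size=0.25, random_state=0, stratify=y
)

# One SVM per feature view, with probability outputs enabled for fusion.
clf_time = SVC(kernel="rbf", probability=True).fit(X_time[idx_train], y[idx_train])
clf_img = SVC(kernel="rbf", probability=True).fit(X_img[idx_train], y[idx_train])

p_time = clf_time.predict_proba(X_time[idx_test])
p_img = clf_img.predict_proba(X_img[idx_test])

# Average-rule fusion of the two classifiers' class probabilities.
y_fused = np.argmax((p_time + p_img) / 2.0, axis=1)
print("fused accuracy:", accuracy_score(y[idx_test], y_fused))
```

The average rule is used here only because it is the simplest decision-level fusion; the paper's contribution is a fuzzy rule-based combination of the two classifiers' outputs, which this sketch does not reproduce.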