Silent EEG-Speech Recognition Using Convolutional and Recurrent Neural Network with 85% Accuracy of 9 Words Classification

In this work, we focus on silent speech recognition in electroencephalography (EEG) data of healthy individuals to advance brain–computer interface (BCI) development toward including people with neurodegeneration and with movement and communication difficulties in society. Our dataset was recorded from 270 healthy subjects during silent speech of eight different Russian words (commands): ‘forward’, ‘backward’, ‘up’, ‘down’, ‘help’, ‘take’, ‘stop’, and ‘release’, and one pseudoword. We began by demonstrating that silent-word distributions can be statistically very close and that words describing directed movements share similar patterns of brain activity. However, after training on one individual, we achieved 85% accuracy on 9-word classification (including the pseudoword) and 88% accuracy on binary classification, on average. We show that a smaller dataset collected from one participant allows a more accurate classifier to be built for that subject than a larger dataset collected from a group of people. At the same time, we show that the learning outcomes on a limited sample of EEG data are transferable to the general population. Thus, we demonstrate the possibility of using selected command words to create an EEG-based input device for people on whom the neural network classifier has not been trained, which is particularly important for people with disabilities.
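The record itself contains no code. As a rough, hypothetical sketch of the kind of convolutional-plus-recurrent classifier the title refers to, the PyTorch model below maps a multichannel EEG window to 9 word classes. The channel count, window length, layer sizes, and choice of framework are illustrative assumptions and are not taken from the paper.

# Hypothetical sketch of a convolutional + recurrent EEG word classifier.
# All shapes and hyperparameters are illustrative, not from the article.
import torch
import torch.nn as nn

class EEGConvGRUClassifier(nn.Module):
    def __init__(self, n_channels: int = 64, n_classes: int = 9):
        super().__init__()
        # Temporal convolution over the multichannel EEG window
        self.conv = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=7, padding=3),
            nn.BatchNorm1d(32),
            nn.ReLU(),
            nn.MaxPool1d(2),
        )
        # Recurrent layer aggregates the convolved time series
        self.gru = nn.GRU(input_size=32, hidden_size=64, batch_first=True)
        self.head = nn.Linear(64, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, n_channels, n_timepoints)
        feats = self.conv(x)            # (batch, 32, n_timepoints // 2)
        feats = feats.transpose(1, 2)   # (batch, time, 32) for the GRU
        _, hidden = self.gru(feats)     # hidden: (1, batch, 64)
        return self.head(hidden[-1])    # (batch, n_classes) logits

# Example: a batch of 8 windows of 64-channel EEG, 256 samples each
model = EEGConvGRUClassifier()
logits = model(torch.randn(8, 64, 256))
print(logits.shape)  # torch.Size([8, 9])

For the binary task mentioned in the abstract, the same sketch would simply use n_classes=2.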


Bibliographic Details
Main Authors: Darya Vorontsova, Ivan Menshikov, Aleksandr Zubov, Kirill Orlov, Peter Rikunov, Ekaterina Zvereva, Lev Flitman, Anton Lanikin, Anna Sokolova, Sergey Markov, Alexandra Bernadotte
Format: Article
Language: English
Published: MDPI AG, 2021-10-01
Series: Sensors, vol. 21, no. 20, article 6744
ISSN: 1424-8220
DOI: 10.3390/s21206744
Subjects: brain–computer interface; neurorehabilitation; neurodegeneration; neurodegeneration treatment; senescence; eSports
Online Access: https://www.mdpi.com/1424-8220/21/20/6744
Source: DOAJ (Directory of Open Access Journals), record doaj.art-78a24a27a16544dc88abf17077960b33
Author Affiliations:
Darya Vorontsova, Aleksandr Zubov, Peter Rikunov, Ekaterina Zvereva, Lev Flitman, Anton Lanikin, Anna Sokolova, Sergey Markov, Alexandra Bernadotte: Experimental ML Systems Subdivision, SberDevices Department, PJSC Sberbank, 121165 Moscow, Russia
Ivan Menshikov: Faculty of Mechanics and Mathematics, Moscow State University, GSP-1, 1 Leninskiye Gory, Main Building, 119991 Moscow, Russia
Kirill Orlov: Research Center of Endovascular Neurosurgery, Federal State Budgetary Institution “Federal Center of Brain Research and Neurotechnologies” of the Federal Medical Biological Agency, Ostrovityanova Street, 1, p. 10, 117997 Moscow, Russia