Multiscale Convolutional Transformer for EEG Classification of Mental Imagery in Different Modalities

The transformer, a sequence-to-sequence model built on attention, has recently been applied to electroencephalogram (EEG) systems. However, most EEG-based transformer models apply attention only in the temporal domain, neglecting the connectivity between brain regions and the relationships between different frequencies. In addition, many studies on imagery-based brain-computer interfaces (BCIs) have been limited to classifying EEG signals within a single type of imagery, so a general model that can learn various types of neural representations is needed. In this study, we designed an experimental paradigm based on motor imagery, visual imagery, and speech imagery tasks to interpret the neural representations of mental imagery in different modalities, and we conducted EEG source localization to investigate the underlying brain networks. We also propose a multiscale convolutional transformer for decoding mental imagery that applies multi-head attention over the spatial, spectral, and temporal domains. The proposed network shows promising performance, with mental imagery classification accuracies of 0.62, 0.70, and 0.72 on a private EEG dataset, the BCI Competition IV 2a dataset, and the Arizona State University dataset, respectively, comparing favorably with conventional deep learning models. We therefore believe it can help overcome the limited number of classes and low classification performance of current BCI systems.

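As an orientation for readers of this record, the sketch below illustrates, in PyTorch, the core idea named in the abstract: multi-head self-attention applied along the spatial (channel) and spectral (frequency-band) axes of an EEG feature tensor, with temporal attention following the same pattern. This is not the authors' implementation; the class name AxisAttention, the tensor layout, and all dimensions are illustrative assumptions.

```python
import torch
import torch.nn as nn

class AxisAttention(nn.Module):
    """Multi-head self-attention over one axis of an EEG feature tensor (illustrative)."""
    def __init__(self, embed_dim: int, num_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(embed_dim, num_heads, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, sequence_length, embed_dim); attention runs across sequence_length
        out, _ = self.attn(x, x, x)
        return out

# Assumed dimensions: 22 EEG channels, 5 frequency bands, 64-dim features per position.
batch, n_channels, n_bands, d = 8, 22, 5, 64
x = torch.randn(batch, n_channels, n_bands, d)

spatial_attn = AxisAttention(embed_dim=d)   # attention across channels
spectral_attn = AxisAttention(embed_dim=d)  # attention across frequency bands

# Spatial attention: fold bands into the batch dimension, channels form the sequence.
xs = x.permute(0, 2, 1, 3).reshape(batch * n_bands, n_channels, d)
xs = spatial_attn(xs).reshape(batch, n_bands, n_channels, d).permute(0, 2, 1, 3)

# Spectral attention: fold channels into the batch dimension, bands form the sequence.
xf = xs.reshape(batch * n_channels, n_bands, d)
xf = spectral_attn(xf).reshape(batch, n_channels, n_bands, d)
print(xf.shape)  # torch.Size([8, 22, 5, 64])
```
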
Bibliographic Details
Main Authors: Hyung-Ju Ahn, Dae-Hyeok Lee, Ji-Hoon Jeong, Seong-Whan Lee
Format: Article
Language: English
Published: IEEE, 2023-01-01
Series: IEEE Transactions on Neural Systems and Rehabilitation Engineering
Subjects: Brain-computer interface, electroencephalogram, mental imagery, transformer, self-attention
Online Access: https://ieeexplore.ieee.org/document/9987523/
ISSN: 1558-0210
DOI: 10.1109/TNSRE.2022.3229330
Volume / Pages: Vol. 31, pp. 646-656
Author Affiliations:
Hyung-Ju Ahn (ORCID: 0000-0002-2504-8946), Department of Brain and Cognitive Engineering, Korea University, Seongbuk, Seoul, South Korea
Dae-Hyeok Lee (ORCID: 0000-0002-2238-8910), Department of Brain and Cognitive Engineering, Korea University, Seongbuk, Seoul, South Korea
Ji-Hoon Jeong (ORCID: 0000-0001-6940-2700), School of Computer Science, Chungbuk National University, Seowon, Cheongju, South Korea
Seong-Whan Lee (ORCID: 0000-0002-6249-4996), Department of Artificial Intelligence, Korea University, Seongbuk, Seoul, South Korea