Leveraging temporal dependency for cross-subject-MI BCIs by contrastive learning and self-attention

Brain-computer interfaces (BCIs) built on the motor imagery (MI) paradigm have found extensive use in motor rehabilitation and the control of assistive applications. However, traditional MI-BCI systems often exhibit suboptimal classification performance and require significant time from new users to collect subject-specific training data. This limitation diminishes the user-friendliness of BCIs and poses significant challenges for developing effective subject-independent models. In response, we propose a novel subject-independent framework that learns temporal dependency for motor imagery BCIs by Contrastive Learning and Self-attention (CLS). The CLS model incorporates a self-attention mechanism and supervised contrastive learning into a deep neural network to extract discriminative features from electroencephalography (EEG) signals. We evaluate the CLS model on two large public datasets covering many subjects under a subject-independent experimental condition. The results show that CLS outperforms six baseline algorithms, improving mean classification accuracy by 1.3% and 4.71% over the best baseline on the Giga and OpenBMI datasets, respectively. Our findings demonstrate that CLS can learn invariant, discriminative features from training data collected from non-target subjects, showcasing its potential for building models for new users without the need for calibration.

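The approach described in the abstract — a deep network with self-attention over EEG temporal dependencies, trained with supervised contrastive learning — can be illustrated with a minimal sketch. The sketch below is not the authors' published CLS implementation: the architecture, layer sizes, segment counts, and all names (AttentiveEEGEncoder, supcon_loss, SEGMENTS, EMB_DIM) are hypothetical assumptions for exposition, written in PyTorch.

import torch
import torch.nn as nn
import torch.nn.functional as F

CHANNELS, TIMEPOINTS = 62, 1000   # hypothetical EEG montage and window length
SEGMENTS, EMB_DIM = 10, 64        # hypothetical temporal split and embedding size

class AttentiveEEGEncoder(nn.Module):
    """Embeds each temporal segment of an EEG trial, applies multi-head
    self-attention across segments to model temporal dependency, then
    pools to a single unit-norm feature vector."""
    def __init__(self):
        super().__init__()
        seg_len = TIMEPOINTS // SEGMENTS
        self.embed = nn.Linear(CHANNELS * seg_len, EMB_DIM)
        self.attn = nn.MultiheadAttention(EMB_DIM, num_heads=4, batch_first=True)
        self.proj = nn.Sequential(nn.Linear(EMB_DIM, EMB_DIM), nn.ReLU(),
                                  nn.Linear(EMB_DIM, EMB_DIM))

    def forward(self, x):                                # x: (B, CHANNELS, TIMEPOINTS)
        b = x.size(0)
        segs = x.reshape(b, CHANNELS, SEGMENTS, -1)      # split the time axis
        segs = segs.permute(0, 2, 1, 3).reshape(b, SEGMENTS, -1)
        tokens = self.embed(segs)                        # (B, SEGMENTS, EMB_DIM)
        attended, _ = self.attn(tokens, tokens, tokens)  # temporal self-attention
        feat = attended.mean(dim=1)                      # pool over segments
        return F.normalize(self.proj(feat), dim=1)       # unit-norm embedding

def supcon_loss(z, labels, temperature=0.1):
    """Supervised contrastive loss: embeddings sharing an MI label (even
    from different subjects) are pulled together; others are pushed apart."""
    sim = z @ z.t() / temperature                        # (B, B) cosine similarities
    eye = torch.eye(len(z), dtype=torch.bool, device=z.device)
    pos = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~eye  # same-label pairs
    logits = sim.masked_fill(eye, float('-inf'))         # exclude self-pairs
    log_prob = logits - torch.logsumexp(logits, dim=1, keepdim=True)
    log_prob = log_prob.masked_fill(eye, 0.0)            # avoid -inf * 0 on the diagonal
    per_anchor = -(log_prob * pos).sum(1) / pos.sum(1).clamp(min=1)
    return per_anchor[pos.sum(1) > 0].mean()             # anchors with at least one positive

# Usage: one training step on a batch pooled across non-target subjects.
encoder = AttentiveEEGEncoder()
x = torch.randn(32, CHANNELS, TIMEPOINTS)                # dummy EEG batch
y = torch.randint(0, 2, (32,))                           # left/right-hand MI labels
loss = supcon_loss(encoder(x), y)
loss.backward()

In the subject-independent setting evaluated in the paper, batches would pool trials from all non-target subjects, so the supervised contrastive loss pulls together same-class trials across subjects — the mechanism that encourages subject-invariant features.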

Bibliographic Details
Main Authors: Sun, Hao, Ding, Yi, Bao, Jianzhu, Qin, Ke, Tong, Chengxuan, Jin, Jing, Guan, Cuntai
Other Authors: School of Computer Science and Engineering
Format: Journal Article
Language: English
Published: 2024
Subjects: Computer and Information Science; Motor imagery; Self-attention
Institution: Nanyang Technological University
Online Access: https://hdl.handle.net/10356/180824
Funding: This work was supported by the China Scholarship Council (CSC) under Grant 202206740012 and by the Agency for Science, Technology and Research (A*STAR) through the RIE2020 AME Programmatic Fund, Singapore (No. A20G8b0102). It was also supported in part by STI 2030 Major Projects under Grant 2022ZD0208900, in part by the National Natural Science Foundation of China under Grant 62176090, in part by the Shanghai Municipal Science and Technology Major Project under Grant 2021SHZDZX, and in part by the Program of Introducing Talents of Discipline to Universities through the 111 Project under Grant B17017. This research was further supported by the National Government Guided Special Funds for Local Science and Technology Development (Shenzhen, China) (No. 2021Szvup043) and by the Project of Jiangsu Province Science and Technology Plan Special Fund in 2022 under Grant BE2022064-1.
Citation: Sun, H., Ding, Y., Bao, J., Qin, K., Tong, C., Jin, J. & Guan, C. (2024). Leveraging temporal dependency for cross-subject-MI BCIs by contrastive learning and self-attention. Neural Networks, 178, 106470. https://dx.doi.org/10.1016/j.neunet.2024.106470
Journal: Neural Networks
ISSN: 0893-6080
DOI: 10.1016/j.neunet.2024.106470
PMID: 38943861
Scopus ID: 2-s2.0-85196953895
Rights: © 2024 Published by Elsevier Ltd. All rights reserved.