Multi-Kernel Temporal and Spatial Convolution for EEG-Based Emotion Classification

Deep learning using an end-to-end convolutional neural network (ConvNet) has been applied to several electroencephalography (EEG)-based brain–computer interface tasks to extract feature maps and classify the target output. However, EEG analysis remains challenging since it requires consideration of various architectural design components that influence the representational ability of extracted features. This study proposes an EEG-based emotion classification model called the multi-kernel temporal and spatial convolution network (MultiT-S ConvNet). The model uses multi-scale kernels to learn various time resolutions, and separable convolutions to find related spatial patterns. In addition, both the temporal and spatial filters are enhanced with a lightweight gating mechanism. To validate the performance and classification accuracy of MultiT-S ConvNet, subject-dependent and subject-independent experiments were conducted on the EEG-based emotion datasets DEAP and SEED. Compared with existing methods, MultiT-S ConvNet achieves higher accuracy with fewer trainable parameters. Moreover, the proposed multi-scale module in temporal filtering extracts a wide range of EEG representations, covering short- to long-wavelength components. This module could be incorporated into any EEG-based convolutional network, potentially improving the model's learning capacity.
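
The abstract describes three architectural ingredients: parallel temporal convolutions with different kernel lengths, a separable (depthwise) spatial convolution across EEG channels, and a lightweight gate on the resulting feature maps. The block below is a minimal, hypothetical PyTorch sketch of how such a stage could be wired together; the kernel lengths, filter counts, and the squeeze-style sigmoid gate are illustrative assumptions, not the authors' published MultiT-S ConvNet configuration.

```python
# Hypothetical sketch of a multi-scale temporal + separable spatial block
# (assumes PyTorch >= 1.9 for padding='same'); sizes are illustrative only.
import torch
import torch.nn as nn


class MultiScaleTemporalSpatialBlock(nn.Module):
    def __init__(self, n_channels=32, n_filters=8, kernel_lengths=(16, 32, 64)):
        super().__init__()
        # Parallel temporal convolutions with different kernel lengths, so the
        # block can respond to both short- and long-wavelength EEG components.
        self.temporal_convs = nn.ModuleList([
            nn.Conv2d(1, n_filters, kernel_size=(1, k), padding='same', bias=False)
            for k in kernel_lengths
        ])
        total_filters = n_filters * len(kernel_lengths)
        # Depthwise spatial convolution spanning all EEG channels: one learned
        # spatial pattern per temporal feature map, keeping parameters few.
        self.spatial_conv = nn.Conv2d(
            total_filters, total_filters, kernel_size=(n_channels, 1),
            groups=total_filters, bias=False)
        self.bn = nn.BatchNorm2d(total_filters)
        self.act = nn.ELU()
        # Lightweight gate: a per-filter sigmoid weighting of the feature maps
        # (one plausible reading of the "lightweight gating mechanism").
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(total_filters, total_filters, kernel_size=1),
            nn.Sigmoid())

    def forward(self, x):
        # x: (batch, 1, n_channels, n_samples) raw EEG segment
        feats = torch.cat([conv(x) for conv in self.temporal_convs], dim=1)
        feats = self.act(self.bn(self.spatial_conv(feats)))
        return feats * self.gate(feats)


if __name__ == "__main__":
    block = MultiScaleTemporalSpatialBlock(n_channels=32)
    eeg = torch.randn(4, 1, 32, 512)   # 4 segments, 32 channels, 512 samples
    print(block(eeg).shape)            # torch.Size([4, 24, 1, 512])
```

Using a depthwise (groups = channels) convolution for the spatial stage keeps that stage parameter-light, which is consistent with the abstract's claim of comparatively few trainable parameters.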


Bibliographic Details
Main Authors: Taweesak Emsawas, Takashi Morita, Tsukasa Kimura, Ken-ichi Fukui, Masayuki Numao
Author Affiliations: Graduate School of Information Science and Technology, Osaka University, Osaka 565-0871, Japan (Emsawas); The Institute of Scientific and Industrial Research (ISIR), Osaka University, Osaka 567-0047, Japan (Morita, Kimura, Fukui, Numao)
Format: Article
Language: English
Published: MDPI AG, 2022-10-01
Series: Sensors, Vol. 22, Issue 21, Article 8250
ISSN: 1424-8220
DOI: 10.3390/s22218250
Collection: Directory of Open Access Journals (DOAJ)
Subjects: brain–computer interface (BCI); electroencephalography (EEG); emotion classification; machine learning; convolutional neural network (ConvNet)
Online Access:https://www.mdpi.com/1424-8220/22/21/8250