Dual-Encoder VAE-GAN With Spatiotemporal Features for Emotional EEG Data Augmentation


Bibliographic Details
Main Authors: Chenxi Tian, Yuliang Ma, Jared Cammon, Feng Fang, Yingchun Zhang, Ming Meng
Format: Article
Language: English
Published: IEEE, 2023-01-01
Series: IEEE Transactions on Neural Systems and Rehabilitation Engineering
Subjects: Emotion recognition; electroencephalogram; dual-encoder; variational autoencoder-generative adversarial network; data augmentation
Online Access: https://ieeexplore.ieee.org/document/10102265/
_version_ 1797805188066574336
author Chenxi Tian
Yuliang Ma
Jared Cammon
Feng Fang
Yingchun Zhang
Ming Meng
author_sort Chenxi Tian
collection DOAJ
description The current data scarcity problem in EEG-based emotion recognition tasks makes it difficult to build high-precision models with existing deep learning methods. To tackle this problem, a dual-encoder variational autoencoder-generative adversarial network (DEVAE-GAN) incorporating spatiotemporal features is proposed to generate high-quality artificial samples. First, EEG data for different emotions are preprocessed into differential entropy features under five frequency bands and divided into segments with a 5 s time window. Second, each feature segment is processed in two forms: the temporal morphology data and the spatial morphology data distributed according to the electrode positions. Finally, the proposed dual encoder is trained to extract information from these two feature forms, concatenate the two pieces of information as latent variables, and feed them into the decoder to generate artificial samples. To evaluate the effectiveness, a systematic experimental study was conducted on the SEED dataset. First, the original training dataset is augmented with different numbers of generated samples; then, the augmented training datasets are used to train a deep neural network that serves as the emotion recognition model. The results show that models trained on the datasets augmented by the proposed method reach an average accuracy of 97.21% over all subjects, a 5% improvement compared to the original dataset, and the similarity between the distributions of the generated and original data is demonstrated. These results show that the proposed model can effectively learn the distribution of the raw data to generate high-quality artificial samples, which in turn can train a high-precision affective model.
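The preprocessing step described above (differential entropy features in five frequency bands, computed over 5 s segments) can be sketched as follows. This is a minimal illustration, not the paper's exact pipeline: the band edges, the 200 Hz sampling rate, the 62-channel layout, and the crude FFT-based band filter are assumptions chosen to match common SEED conventions. For a band-filtered signal modeled as Gaussian, the differential entropy reduces to 0.5 * ln(2*pi*e*sigma^2).

```python
import numpy as np

# Assumed band edges (Hz), typical for SEED-style DE features
BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 14),
         "beta": (14, 31), "gamma": (31, 50)}

def bandpass_fft(x, fs, lo, hi):
    """Crude FFT band filter for illustration: zero bins outside [lo, hi]."""
    spectrum = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    spectrum[(freqs < lo) | (freqs > hi)] = 0.0
    return np.fft.irfft(spectrum, n=len(x))

def de_features(segment, fs=200):
    """Differential entropy per band for one channel's segment.

    Under a Gaussian assumption, DE = 0.5 * ln(2*pi*e*var(x_band)).
    """
    feats = []
    for lo, hi in BANDS.values():
        banded = bandpass_fft(segment, fs, lo, hi)
        feats.append(0.5 * np.log(2 * np.pi * np.e * np.var(banded)))
    return np.array(feats)

# One 5 s segment: 62 channels at an assumed 200 Hz sampling rate
rng = np.random.default_rng(0)
eeg_segment = rng.standard_normal((62, 5 * 200))
features = np.stack([de_features(ch) for ch in eeg_segment])  # (62, 5)
```

Each segment thus yields a channels-by-bands feature matrix; the paper's temporal and spatial "morphology" views would then be two rearrangements of such features (by time order and by electrode position, respectively) before they reach the dual encoder.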
first_indexed 2024-03-13T05:48:16Z
format Article
id doaj.art-ddb7cf71ca7d40aa809fef13e18571fd
institution Directory Open Access Journal
issn 1558-0210
language English
last_indexed 2024-03-13T05:48:16Z
publishDate 2023-01-01
publisher IEEE
record_format Article
series IEEE Transactions on Neural Systems and Rehabilitation Engineering
spelling doaj.art-ddb7cf71ca7d40aa809fef13e18571fd
indexed: 2023-06-13T20:07:53Z
language: eng
publisher: IEEE
journal: IEEE Transactions on Neural Systems and Rehabilitation Engineering (ISSN 1558-0210), 2023-01-01, vol. 31, pp. 2018-2027
doi: 10.1109/TNSRE.2023.3266810
article number: 10102265
title: Dual-Encoder VAE-GAN With Spatiotemporal Features for Emotional EEG Data Augmentation
authors:
Chenxi Tian (School of Automation, Hangzhou Dianzi University, Hangzhou, Zhejiang, China)
Yuliang Ma (https://orcid.org/0000-0003-1277-4663; School of Automation, Hangzhou Dianzi University, Hangzhou, Zhejiang, China)
Jared Cammon (https://orcid.org/0000-0002-7054-9782; Department of Biomedical Engineering, University of Houston, Houston, TX, USA)
Feng Fang (Department of Biomedical Engineering, University of Houston, Houston, TX, USA)
Yingchun Zhang (https://orcid.org/0000-0002-1927-4103; Department of Biomedical Engineering, University of Houston, Houston, TX, USA)
Ming Meng (https://orcid.org/0000-0002-1976-4794; School of Automation, Hangzhou Dianzi University, Hangzhou, Zhejiang, China)
url: https://ieeexplore.ieee.org/document/10102265/
keywords: Emotion recognition; electroencephalogram; dual-encoder; variational autoencoder-generative adversarial network; data augmentation
title Dual-Encoder VAE-GAN With Spatiotemporal Features for Emotional EEG Data Augmentation
topic Emotion recognition
electroencephalogram
dual-encoder
variational autoencoder-generative adversarial network
data augmentation
url https://ieeexplore.ieee.org/document/10102265/