Cross-Session Emotion Recognition by Joint Label-Common and Label-Specific EEG Features Exploration


Bibliographic Details
Main Authors: Yong Peng, Honggang Liu, Junhua Li, Jun Huang, Bao-Liang Lu, Wanzeng Kong
Format: Article
Language: English
Published: IEEE, 2023-01-01
Series: IEEE Transactions on Neural Systems and Rehabilitation Engineering
Online Access: https://ieeexplore.ieee.org/document/10003248/
Description
Summary: Since the Electroencephalogram (EEG) is resistant to camouflage, it has been a reliable data source for objective emotion recognition. EEG is naturally multi-rhythm and multi-channel, from which multiple features can be extracted for further processing. In EEG-based emotion recognition, it is important to investigate whether there exist common features shared by different emotional states, as well as specific features associated with each individual emotional state. However, this fundamental question is overlooked by most existing studies. To this end, we propose a Joint label-Common and label-Specific Features Exploration (JCSFE) model for semi-supervised cross-session EEG emotion recognition. Specifically, JCSFE imposes the $\ell_{2,1}$-norm on the projection matrix to explore the label-common EEG features, while the $\ell_{1}$-norm is simultaneously used to explore the label-specific EEG features. In addition, a graph regularization term is introduced to enforce the local invariance property of the data, i.e., similar EEG samples are encouraged to have the same emotional state. Experimental results on the SEED-IV and SEED-V emotional data sets demonstrate that JCSFE not only achieves superior emotion recognition performance compared with state-of-the-art models but also provides a quantitative method for identifying the label-common and label-specific EEG features in emotion recognition.
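The two regularizers named in the summary play complementary roles on the projection matrix: the $\ell_{2,1}$-norm zeroes out whole rows (a feature discarded or kept for all emotion labels at once, i.e., label-common selection), while the $\ell_{1}$-norm zeroes out individual entries (a feature kept for some labels only, i.e., label-specific selection). A minimal NumPy sketch of the two norms, using a small hypothetical projection matrix `W` (rows = EEG features, columns = emotion labels) not taken from the paper:

```python
import numpy as np

# Hypothetical projection matrix: 3 EEG features x 2 emotion labels.
# Row 1 is dense (shared by both labels), row 2 is all-zero (discarded),
# row 3 is active for the first label only (label-specific).
W = np.array([[3.0, 4.0],
              [0.0, 0.0],
              [1.0, 0.0]])

# l2,1-norm: sum of the l2 norms of the rows. Penalizing it drives entire
# rows to zero, selecting features common to all labels.
l21_norm = np.sum(np.linalg.norm(W, axis=1))  # ||[3,4]|| + ||[0,0]|| + ||[1,0]|| = 5 + 0 + 1

# l1-norm: sum of absolute entries. Penalizing it drives individual entries
# to zero, allowing a feature to stay active for only some labels.
l1_norm = np.sum(np.abs(W))  # 3 + 4 + 0 + 0 + 1 + 0

print(l21_norm)  # 6.0
print(l1_norm)   # 8.0
```

Combining both penalties, as JCSFE does, lets the learned projection matrix exhibit row-level sparsity (common features) and entry-level sparsity (specific features) at the same time.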
ISSN:1558-0210