Deep Convolutional Neural Network-Based Visual Stimuli Classification Using Electroencephalography Signals of Healthy and Alzheimer’s Disease Subjects

Bibliographic Details
Main Authors: Dovilė Komolovaitė, Rytis Maskeliūnas, Robertas Damaševičius
Format: Article
Language: English
Published: MDPI AG 2022-03-01
Series: Life
Subjects: Alzheimer’s disease, electroencephalogram, SSVEP, visual stimuli classification, face inversion, generative adversarial networks
Online Access: https://www.mdpi.com/2075-1729/12/3/374
author Dovilė Komolovaitė
Rytis Maskeliūnas
Robertas Damaševičius
author_facet Dovilė Komolovaitė
Rytis Maskeliūnas
Robertas Damaševičius
author_sort Dovilė Komolovaitė
collection DOAJ
description Visual perception is an important part of human life. In the context of facial recognition, it allows us to distinguish emotions and the important facial features that set one person apart from another. However, subjects suffering from memory loss face significant facial-processing problems. If the perception of facial features is affected by memory impairment, then it should be possible to classify visual stimuli using brain activity data from the visual processing regions of the brain. This study differentiates the aspects of familiarity and emotion using the face-inversion effect and applies convolutional neural network (CNN) models (EEGNet, EEGNet SSVEP (steady-state visual evoked potentials), and DeepConvNet) to learn discriminative features from raw electroencephalography (EEG) signals. Because of the limited number of available EEG data samples, Generative Adversarial Networks (GAN) and Variational Autoencoders (VAE) are introduced to generate synthetic EEG signals. The generated data are used to pretrain the models, and the learned weights then initialize training on the real EEG data. We investigate minor facial characteristics in brain signals and the ability of deep CNN models to learn them. The effect of face inversion was studied, and a considerable, sustained delay of the N170 component was observed. Accordingly, the emotional and familiarity stimuli were divided into two categories based on the posture of the face. The upright and inverted stimulus categories show the lowest rates of confusion, demonstrating once more the models’ ability to learn the face-inversion effect.
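The description above outlines a two-stage training scheme: GAN- or VAE-generated EEG epochs are used to pretrain the CNN classifiers, and the pretrained weights then initialize training on the real recordings. The Python sketch below illustrates that pretrain-then-fine-tune pattern with a deliberately simplified EEGNet-style network in PyTorch; the SmallEEGNet architecture, the random placeholder tensors standing in for synthetic and real epochs, and all hyperparameters are illustrative assumptions, not the models or settings used in the paper.

    # Hypothetical sketch: pretrain an EEG CNN on synthetic (GAN/VAE-generated) epochs,
    # then reuse the weights to initialize training on real EEG data.
    # Shapes, layer sizes, and hyperparameters are illustrative, not the paper's.
    import torch
    import torch.nn as nn
    from torch.utils.data import DataLoader, TensorDataset


    class SmallEEGNet(nn.Module):
        """Simplified EEGNet-style classifier for (batch, 1, channels, samples) input."""

        def __init__(self, n_channels: int = 32, n_samples: int = 256, n_classes: int = 2):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 8, kernel_size=(1, 64), padding=(0, 32), bias=False),   # temporal filters
                nn.BatchNorm2d(8),
                nn.Conv2d(8, 16, kernel_size=(n_channels, 1), groups=8, bias=False),  # spatial (depthwise) filters
                nn.BatchNorm2d(16),
                nn.ELU(),
                nn.AvgPool2d((1, 4)),
                nn.Dropout(0.5),
            )
            with torch.no_grad():  # infer the flattened feature size from a dummy pass
                n_feat = self.features(torch.zeros(1, 1, n_channels, n_samples)).numel()
            self.classifier = nn.Linear(n_feat, n_classes)

        def forward(self, x):
            return self.classifier(self.features(x).flatten(1))


    def train(model, loader, epochs, lr=1e-3):
        opt = torch.optim.Adam(model.parameters(), lr=lr)
        loss_fn = nn.CrossEntropyLoss()
        for _ in range(epochs):
            for x, y in loader:
                opt.zero_grad()
                loss_fn(model(x), y).backward()
                opt.step()
        return model


    if __name__ == "__main__":
        # Placeholder tensors stand in for GAN/VAE-generated and real EEG epochs.
        synthetic = TensorDataset(torch.randn(512, 1, 32, 256), torch.randint(0, 2, (512,)))
        real = TensorDataset(torch.randn(128, 1, 32, 256), torch.randint(0, 2, (128,)))

        # Stage 1: pretrain on the synthetic EEG epochs.
        model = train(SmallEEGNet(), DataLoader(synthetic, batch_size=64, shuffle=True), epochs=5)

        # Stage 2: copy the pretrained weights and fine-tune on the real recordings.
        finetuned = SmallEEGNet()
        finetuned.load_state_dict(model.state_dict())
        train(finetuned, DataLoader(real, batch_size=32, shuffle=True), epochs=10, lr=1e-4)

In the study itself, the synthetic epochs would come from the trained GAN or VAE rather than from torch.randn, and EEGNet, EEGNet SSVEP, or DeepConvNet would take the place of the toy network.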
first_indexed 2024-03-09T13:34:32Z
format Article
id doaj.art-d1598a6f7d804b4aaa4248520cb9eaed
institution Directory Open Access Journal
issn 2075-1729
language English
last_indexed 2024-03-09T13:34:32Z
publishDate 2022-03-01
publisher MDPI AG
record_format Article
series Life
spelling doaj.art-d1598a6f7d804b4aaa4248520cb9eaed 2023-11-30T21:13:49Z
doi 10.3390/life12030374
citation Life, vol. 12, no. 3, article 374, MDPI AG, 2022-03-01
author_affiliation Dovilė Komolovaitė: Department of Multimedia Engineering, Kaunas University of Technology, 51368 Kaunas, Lithuania
author_affiliation Rytis Maskeliūnas: Department of Multimedia Engineering, Kaunas University of Technology, 51368 Kaunas, Lithuania
author_affiliation Robertas Damaševičius: Department of Applied Informatics, Vytautas Magnus University, 44404 Kaunas, Lithuania
title Deep Convolutional Neural Network-Based Visual Stimuli Classification Using Electroencephalography Signals of Healthy and Alzheimer’s Disease Subjects
topic Alzheimer’s disease
electroencephalogram
SSVEP
visual stimuli classification
face inversion
generative adversarial networks
url https://www.mdpi.com/2075-1729/12/3/374