Interpreting Disentangled Representations of Person-Specific Convolutional Variational Autoencoders of Spatially Preserving EEG Topographic Maps via Clustering and Visual Plausibility
Dimensionality reduction and producing simple representations of electroencephalography (EEG) signals are challenging problems. Variational autoencoders (VAEs) have been employed for EEG data creation, augmentation, and automatic feature extraction. In most studies, VAE latent space interpretation is used only to detect out-of-distribution latent variables for anomaly detection. However, interpreting and visualising all latent space components discloses information about how the model arrives at its conclusion. The main contribution of this study is interpreting the disentangled representation of the VAE by activating only one latent component at a time, while the remaining components are set to zero, the mean of the latent distribution. The results show that the CNN-VAE works well, as indicated by metrics such as SSIM, MSE, MAE, and MAPE, along with SNR and correlation coefficient values between the architecture's input and output. Furthermore, visual plausibility and clustering demonstrate that each component contributes differently to capturing the generative factors in topographic maps. Our proposed pipeline adds to the body of knowledge by delivering a CNN-VAE-based latent space interpretation model. This helps us understand the model's decisions and the importance of each latent space component responsible for activating parts of the brain.
Main Authors: | Taufique Ahmed, Luca Longo |
---|---|
Format: | Article |
Language: | English |
Published: | MDPI AG, 2023-09-01 |
Series: | Information |
Subjects: | electroencephalography; convolutional variational autoencoder; latent space interpretation; deep learning; spectral topographic maps |
Online Access: | https://www.mdpi.com/2078-2489/14/9/489 |
author | Taufique Ahmed; Luca Longo
collection | DOAJ |
description | Dimensionality reduction and producing simple representations of electroencephalography (EEG) signals are challenging problems. Variational autoencoders (VAEs) have been employed for EEG data creation, augmentation, and automatic feature extraction. In most studies, VAE latent space interpretation is used only to detect out-of-distribution latent variables for anomaly detection. However, interpreting and visualising all latent space components discloses information about how the model arrives at its conclusion. The main contribution of this study is interpreting the disentangled representation of the VAE by activating only one latent component at a time, while the remaining components are set to zero, the mean of the latent distribution. The results show that the CNN-VAE works well, as indicated by metrics such as SSIM, MSE, MAE, and MAPE, along with SNR and correlation coefficient values between the architecture's input and output. Furthermore, visual plausibility and clustering demonstrate that each component contributes differently to capturing the generative factors in topographic maps. Our proposed pipeline adds to the body of knowledge by delivering a CNN-VAE-based latent space interpretation model. This helps us understand the model's decisions and the importance of each latent space component responsible for activating parts of the brain. |
first_indexed | 2024-03-10T22:37:59Z |
format | Article |
id | doaj.art-37df3548904c453aa69730dcf386111d |
institution | Directory Open Access Journal |
issn | 2078-2489 |
language | English |
last_indexed | 2024-03-10T22:37:59Z |
publishDate | 2023-09-01 |
publisher | MDPI AG |
record_format | Article |
series | Information |
spelling | Taufique Ahmed and Luca Longo (Artificial Intelligence and Cognitive Load Lab, The Applied Intelligence Research Centre, School of Computer Science, Technological University Dublin, D07 EWV4 Dublin, Ireland). "Interpreting Disentangled Representations of Person-Specific Convolutional Variational Autoencoders of Spatially Preserving EEG Topographic Maps via Clustering and Visual Plausibility." Information 14(9): 489, MDPI AG, 2023-09-01. ISSN 2078-2489. DOI: 10.3390/info14090489. Online access: https://www.mdpi.com/2078-2489/14/9/489
title | Interpreting Disentangled Representations of Person-Specific Convolutional Variational Autoencoders of Spatially Preserving EEG Topographic Maps via Clustering and Visual Plausibility |
topic | electroencephalography; convolutional variational autoencoder; latent space interpretation; deep learning; spectral topographic maps
url | https://www.mdpi.com/2078-2489/14/9/489 |
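The abstract above describes an interpretation procedure in which each latent component of the trained CNN-VAE is activated on its own while the remaining components are held at zero, the mean of the latent prior, and the decoded topographic map is then inspected via visual plausibility and clustering. The following is a minimal sketch of that single-component activation step, not the authors' code: the decoder class, latent size, map resolution, and activation value are assumptions made for illustration, and in practice the trained person-specific CNN-VAE decoder would replace the stand-in.

```python
# Minimal sketch (assumptions, not the authors' implementation) of decoding
# one latent component at a time while the other components stay at zero,
# the mean of the latent prior.
import torch
import torch.nn as nn

LATENT_DIM = 25   # assumed latent size
MAP_SIZE = 32     # assumed topographic-map resolution (32x32)


class ToyDecoder(nn.Module):
    """Hypothetical stand-in for a trained CNN-VAE decoder: z -> topographic map."""

    def __init__(self, latent_dim: int = LATENT_DIM):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 64 * 8 * 8)
        self.deconv = nn.Sequential(
            nn.ConvTranspose2d(64, 32, kernel_size=4, stride=2, padding=1),
            nn.ReLU(),
            nn.ConvTranspose2d(32, 1, kernel_size=4, stride=2, padding=1),
            nn.Sigmoid(),
        )

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        h = self.fc(z).view(-1, 64, 8, 8)
        return self.deconv(h)  # (N, 1, 32, 32)


def activate_one_component(decoder: nn.Module,
                           latent_dim: int,
                           value: float = 3.0) -> torch.Tensor:
    """Decode latent vectors in which exactly one component is non-zero.

    Returns a tensor of shape (latent_dim, 1, H, W): one generated map per
    latent component, with all other components fixed at the prior mean (0).
    """
    z = torch.zeros(latent_dim, latent_dim)
    z[torch.arange(latent_dim), torch.arange(latent_dim)] = value  # one-hot * value
    with torch.no_grad():
        return decoder(z)


if __name__ == "__main__":
    decoder = ToyDecoder()  # in practice: the trained person-specific CNN-VAE decoder
    maps = activate_one_component(decoder, LATENT_DIM)
    print(maps.shape)  # torch.Size([25, 1, 32, 32])
```

Each generated map corresponds to one latent component and can be clustered or inspected for visual plausibility; reconstruction fidelity between the encoder input and decoder output would be assessed separately with metrics such as SSIM, MSE, MAE, MAPE, SNR, and the correlation coefficient, as the abstract notes.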