On the Dimensionality and Utility of Convolutional Autoencoder’s Latent Space Trained with Topology-Preserving Spectral EEG Head-Maps
Electroencephalography (EEG) signals can be analyzed in the temporal, spatial, or frequency domains. Noise and artifacts during the data acquisition phase contaminate these signals, adding difficulties to their analysis. Techniques such as Independent Component Analysis (ICA) require human intervention to remove noise and artifacts.
Main Authors: | Arjun Vinayak Chikkankod, Luca Longo |
---|---|
Format: | Article |
Language: | English |
Published: | MDPI AG, 2022-11-01 |
Series: | Machine Learning and Knowledge Extraction |
Subjects: | electroencephalography; latent space analysis; sliding windowing; convolutional autoencoders; automatic feature extraction; dense neural network |
Online Access: | https://www.mdpi.com/2504-4990/4/4/53 |
_version_ | 1797456567027630080 |
---|---|
author | Arjun Vinayak Chikkankod; Luca Longo
author_facet | Arjun Vinayak Chikkankod; Luca Longo
author_sort | Arjun Vinayak Chikkankod |
collection | DOAJ |
description | Electroencephalography (EEG) signals can be analyzed in the temporal, spatial, or frequency domains. Noise and artifacts during the data acquisition phase contaminate these signals, adding difficulties to their analysis. Techniques such as Independent Component Analysis (ICA) require human intervention to remove noise and artifacts. Autoencoders have automated artifact detection and removal by representing inputs in a lower-dimensional latent space. However, little research is devoted to understanding the minimum dimension of such a latent space that allows meaningful input reconstruction. Person-specific convolutional autoencoders are designed by manipulating the size of their latent space. An overlapping sliding-window technique is employed to segment the signals into variable-sized windows. Five topographic head-maps are formed in the frequency domain for each window. The latent space of the autoencoders is assessed using input reconstruction capacity and classification utility. Findings indicate that the minimal latent space dimension for achieving maximum reconstruction capacity and maximal classification accuracy is <inline-formula><math xmlns="http://www.w3.org/1998/Math/MathML" display="inline"><semantics><mrow><mn>25</mn><mo>%</mo></mrow></semantics></math></inline-formula> of the size of the topographic maps, achieved with a window length of at least 1 s and a shift of 125 ms at the 128 Hz sampling rate. This research contributes to the body of knowledge with an architectural pipeline for eliminating redundant EEG data while preserving relevant features with deep autoencoders. |
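The windowing parameters reported in the abstract (1 s windows, 125 ms shift, 128 Hz sampling rate) can be illustrated with a minimal sketch. The helper below is hypothetical, not the authors' code, and assumes a single 1-D EEG channel:

```python
import numpy as np

def sliding_windows(signal, fs=128, win_s=1.0, shift_s=0.125):
    """Segment a 1-D EEG channel into overlapping windows.

    fs: sampling rate (Hz); win_s: window length (s); shift_s: shift (s).
    With fs=128, a 1.0 s window and 0.125 s shift give 128-sample windows
    advanced by 16 samples, i.e. 87.5% overlap between consecutive windows.
    """
    win = int(win_s * fs)      # 128 samples per window
    shift = int(shift_s * fs)  # 16-sample hop between window starts
    n = (len(signal) - win) // shift + 1
    return np.stack([signal[i * shift : i * shift + win] for i in range(n)])

# 512 samples (4 s at 128 Hz) -> (512 - 128) // 16 + 1 = 25 windows
windows = sliding_windows(np.arange(512.0))
```

In the paper's pipeline, each such window would then be transformed into five frequency-domain topographic head-maps before being fed to the autoencoder.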
first_indexed | 2024-03-09T16:09:40Z |
format | Article |
id | doaj.art-07f3eba3256e49f2b22718c444616f19 |
institution | Directory Open Access Journal |
issn | 2504-4990 |
language | English |
last_indexed | 2024-03-09T16:09:40Z |
publishDate | 2022-11-01 |
publisher | MDPI AG |
record_format | Article |
series | Machine Learning and Knowledge Extraction |
spelling | doaj.art-07f3eba3256e49f2b22718c444616f19; 2023-11-24T16:19:04Z; eng; MDPI AG; Machine Learning and Knowledge Extraction; 2504-4990; 2022-11-01; vol. 4, no. 4, pp. 1042–1064; 10.3390/make4040053; On the Dimensionality and Utility of Convolutional Autoencoder’s Latent Space Trained with Topology-Preserving Spectral EEG Head-Maps; Arjun Vinayak Chikkankod, Luca Longo (both: Artificial Intelligence and Cognitive Load Lab, The Applied Intelligence Research Centre, School of Computer Science, Technological University Dublin (TU Dublin), D07 EWV4 Dublin, Ireland); https://www.mdpi.com/2504-4990/4/4/53; electroencephalography; latent space analysis; sliding windowing; convolutional autoencoders; automatic feature extraction; dense neural network |
spellingShingle | Arjun Vinayak Chikkankod Luca Longo On the Dimensionality and Utility of Convolutional Autoencoder’s Latent Space Trained with Topology-Preserving Spectral EEG Head-Maps Machine Learning and Knowledge Extraction electroencephalography latent space analysis sliding windowing convolutional autoencoders automatic feature extraction dense neural network |
title | On the Dimensionality and Utility of Convolutional Autoencoder’s Latent Space Trained with Topology-Preserving Spectral EEG Head-Maps |
title_full | On the Dimensionality and Utility of Convolutional Autoencoder’s Latent Space Trained with Topology-Preserving Spectral EEG Head-Maps |
title_fullStr | On the Dimensionality and Utility of Convolutional Autoencoder’s Latent Space Trained with Topology-Preserving Spectral EEG Head-Maps |
title_full_unstemmed | On the Dimensionality and Utility of Convolutional Autoencoder’s Latent Space Trained with Topology-Preserving Spectral EEG Head-Maps |
title_short | On the Dimensionality and Utility of Convolutional Autoencoder’s Latent Space Trained with Topology-Preserving Spectral EEG Head-Maps |
title_sort | on the dimensionality and utility of convolutional autoencoder s latent space trained with topology preserving spectral eeg head maps |
topic | electroencephalography latent space analysis sliding windowing convolutional autoencoders automatic feature extraction dense neural network |
url | https://www.mdpi.com/2504-4990/4/4/53 |
work_keys_str_mv | AT arjunvinayakchikkankod onthedimensionalityandutilityofconvolutionalautoencoderslatentspacetrainedwithtopologypreservingspectraleegheadmaps AT lucalongo onthedimensionalityandutilityofconvolutionalautoencoderslatentspacetrainedwithtopologypreservingspectraleegheadmaps |