Deep Learning Architecture Reduction for fMRI Data


Bibliographic Details
Main Authors: Ruben Alvarez-Gonzalez, Andres Mendez-Vazquez
Format: Article
Language: English
Published: MDPI AG, 2022-02-01
Series: Brain Sciences
Subjects: CNN; machine learning; deep learning; computer vision; transfer learning
Online Access: https://www.mdpi.com/2076-3425/12/2/235
_version_ 1797482178153545728
collection DOAJ
description In recent years, deep learning models have demonstrated an inherently better ability to tackle non-linear classification tasks, due to advances in deep learning architectures. However, much remains to be achieved, especially in designing deep convolutional neural network (CNN) configurations. The number of hyper-parameters that need to be optimized to achieve accuracy in classification problems increases with every layer used, and the selection of kernels in each CNN layer has an impact on the overall CNN performance in the training stage, as well as in the classification process. When a popular classifier fails to perform acceptably in practical applications, it may be due to deficiencies in the algorithm and data processing. Thus, understanding the feature extraction process provides insights to help optimize pre-trained architectures, better generalize the models, and obtain the context of each layer’s features. In this work, we aim to improve feature extraction through the use of a texture amortization map (TAM). An algorithm was developed to obtain characteristics from the filters amortizing the filter’s effect depending on the texture of the neighboring pixels. From the initial algorithm, a novel geometric classification score (GCS) was developed, in order to obtain a measure that indicates the effect of one class on another in a classification problem, in terms of the complexity of the learnability in every layer of the deep learning architecture. For this, we assume that all the data transformations in the inner layers still belong to a Euclidean space. In this scenario, we can evaluate which layers provide the best transformations in a CNN, allowing us to reduce the weights of the deep learning architecture using the geometric hypothesis.
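The description above sketches the article's core idea: assuming each layer's transformed features still live in a Euclidean space, one can score how well a layer separates the classes and keep only the best-performing layers. The record does not give the actual formula for the geometric classification score (GCS), so the following is only a minimal illustration of that kind of layer-wise separability measure, using a simple between-class/within-class scatter ratio on flattened activations; the function name and scoring rule are assumptions, not the paper's method:

```python
import numpy as np

def geometric_separability(features, labels):
    """Toy per-layer separability score (NOT the paper's GCS):
    ratio of between-class to within-class scatter of flattened
    layer activations. Higher values suggest the layer's
    transformation separates the classes better in Euclidean space."""
    X = features.reshape(len(features), -1)
    overall_mean = X.mean(axis=0)
    between, within = 0.0, 0.0
    for c in np.unique(labels):
        Xc = X[labels == c]          # samples belonging to class c
        mu_c = Xc.mean(axis=0)       # class centroid
        between += len(Xc) * np.sum((mu_c - overall_mean) ** 2)
        within += np.sum((Xc - mu_c) ** 2)
    return between / within
```

Applied to the activations of every layer of a trained CNN, a score like this would let one rank layers by how much each transformation helps the classification, which is the premise behind pruning the architecture's weights under the geometric hypothesis.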
first_indexed 2024-03-09T22:27:32Z
id doaj.art-f67e216978f148f58590db4df210e482
institution Directory Open Access Journal
issn 2076-3425
last_indexed 2024-03-09T22:27:32Z
record_format Article
doi 10.3390/brainsci12020235
author_affiliation Department of Computer Science, Cinvestav Guadalajara, Zapopan 45015, Mexico (both authors)
topic CNN
machine learning
deep learning
computer vision
transfer learning