SSCLNet: A Self-Supervised Contrastive Loss-Based Pre-Trained Network for Brain MRI Classification
Brain magnetic resonance images (MRI) convey vital information for making diagnostic decisions and are widely used to detect brain tumors. This research proposes a self-supervised pre-training method based on feature representation learning through a contrastive loss applied to unlabeled data. Self-supervised learning aims to learn salient features directly from the raw input, which is valuable because labeled data is scarce and expensive. For the contrastive loss-based pre-training, data augmentation is applied to the dataset, and positive and negative instance pairs are fed into a deep learning model for feature learning. Subsequently, the features are passed through a neural network that maximizes the similarity of positive pairs while contrasting them against negative instances. The pre-trained model then serves as the encoder for supervised training and classification of MRI images.
Main Authors: | Animesh Mishra; Ritesh Jha; Vandana Bhattacharjee |
Format: | Article |
Language: | English |
Published: | IEEE, 2023-01-01 |
Series: | IEEE Access |
Subjects: | Contrastive learning; convolutional neural networks; pre-training; ResNet; self-supervised |
Online Access: | https://ieeexplore.ieee.org/document/10018340/ |
_version_ | 1797945063198687232 |
author | Animesh Mishra; Ritesh Jha; Vandana Bhattacharjee
author_facet | Animesh Mishra; Ritesh Jha; Vandana Bhattacharjee
author_sort | Animesh Mishra |
collection | DOAJ |
description | Brain magnetic resonance images (MRI) convey vital information for making diagnostic decisions and are widely used to detect brain tumors. This research proposes a self-supervised pre-training method based on feature representation learning through a contrastive loss applied to unlabeled data. Self-supervised learning aims to learn salient features directly from the raw input, which is valuable because labeled data is scarce and expensive. For the contrastive loss-based pre-training, data augmentation is applied to the dataset, and positive and negative instance pairs are fed into a deep learning model for feature learning. Subsequently, the features are passed through a neural network that maximizes the similarity of positive pairs while contrasting them against negative instances. The pre-trained model then serves as the encoder for supervised training and classification of MRI images. Our results show that self-supervised pre-training with contrastive loss outperforms both random and ImageNet initialization. We also show that contrastive learning performs better when the diversity of images in the pre-training dataset is greater. Three ResNet models of different depths are used as the base models. Further experiments study the effect of varying the augmentation types used to generate positive and negative samples for self-supervised training. |
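The pre-training objective described above follows the standard contrastive recipe: two augmented views of the same image form a positive pair, and all other images in the batch serve as negatives. The paper does not publish code, so the sketch below is only an illustration of a SimCLR-style normalized temperature-scaled cross-entropy (NT-Xent) loss of the kind the abstract describes; the function name, temperature value, and PyTorch framing are assumptions, not the authors' implementation.

```python
# Hypothetical NT-Xent contrastive loss sketch (not the authors' code).
# z1, z2 hold projections of two augmented views of the same N images;
# row i of z1 and row i of z2 form a positive pair, and all other rows
# act as negatives, as in the pre-training setup the abstract describes.
import torch
import torch.nn.functional as F

def nt_xent_loss(z1: torch.Tensor, z2: torch.Tensor,
                 temperature: float = 0.5) -> torch.Tensor:
    n = z1.size(0)
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)  # (2N, D), unit-norm rows
    sim = (z @ z.t()) / temperature                     # scaled cosine similarities
    sim.fill_diagonal_(float("-inf"))                   # exclude self-similarity
    # The positive for row i is its other augmented view: i+N for the
    # first half of the batch, i-N for the second half.
    targets = torch.cat([torch.arange(n) + n, torch.arange(n)]).to(z.device)
    return F.cross_entropy(sim, targets)
```

In a pipeline like the one the abstract outlines, z1 and z2 would come from passing two random augmentations of each MRI through the ResNet base model and a small projection network; after pre-training, the projection network would typically be discarded and the ResNet encoder fine-tuned with a classification head on the labeled MRI data.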
first_indexed | 2024-04-10T20:49:21Z |
format | Article |
id | doaj.art-a4ff21c9658542ff892f12665b35ad72 |
institution | Directory Open Access Journal |
issn | 2169-3536 |
language | English |
last_indexed | 2024-04-10T20:49:21Z |
publishDate | 2023-01-01 |
publisher | IEEE |
record_format | Article |
series | IEEE Access |
spelling | doaj.art-a4ff21c9658542ff892f12665b35ad72 | 2023-01-24T00:00:50Z | eng | IEEE | IEEE Access | ISSN 2169-3536 | 2023-01-01 | vol. 11, pp. 6673-6681 | DOI 10.1109/ACCESS.2023.3237542 | IEEE document 10018340 | SSCLNet: A Self-Supervised Contrastive Loss-Based Pre-Trained Network for Brain MRI Classification | Animesh Mishra; Ritesh Jha (https://orcid.org/0000-0003-3293-5954); Vandana Bhattacharjee (https://orcid.org/0000-0002-0680-2691) | Birla Institute of Technology, Mesra, Ranchi, India | https://ieeexplore.ieee.org/document/10018340/ | Contrastive learning; convolutional neural networks; pre-training; ResNet; self-supervised |
spellingShingle | Animesh Mishra; Ritesh Jha; Vandana Bhattacharjee | SSCLNet: A Self-Supervised Contrastive Loss-Based Pre-Trained Network for Brain MRI Classification | IEEE Access | Contrastive learning; convolutional neural networks; pre-training; ResNet; self-supervised |
title | SSCLNet: A Self-Supervised Contrastive Loss-Based Pre-Trained Network for Brain MRI Classification |
title_full | SSCLNet: A Self-Supervised Contrastive Loss-Based Pre-Trained Network for Brain MRI Classification |
title_fullStr | SSCLNet: A Self-Supervised Contrastive Loss-Based Pre-Trained Network for Brain MRI Classification |
title_full_unstemmed | SSCLNet: A Self-Supervised Contrastive Loss-Based Pre-Trained Network for Brain MRI Classification |
title_short | SSCLNet: A Self-Supervised Contrastive Loss-Based Pre-Trained Network for Brain MRI Classification |
title_sort | ssclnet a self supervised contrastive loss based pre trained network for brain mri classification |
topic | Contrastive learning; convolutional neural networks; pre-training; ResNet; self-supervised |
url | https://ieeexplore.ieee.org/document/10018340/ |
work_keys_str_mv | AT animeshmishra ssclnetaselfsupervisedcontrastivelossbasedpretrainednetworkforbrainmriclassification AT riteshjha ssclnetaselfsupervisedcontrastivelossbasedpretrainednetworkforbrainmriclassification AT vandanabhattacharjee ssclnetaselfsupervisedcontrastivelossbasedpretrainednetworkforbrainmriclassification |