Multi-Source Remote Sensing Pretraining Based on Contrastive Self-Supervised Learning
SAR-optical images from different sensors can provide consistent information for scene classification. However, the utilization of unlabeled SAR-optical images in deep learning-based remote sensing image interpretation remains an open issue. In recent years, contrastive self-supervised learning (CSSL)...
Main Authors: | Chenfang Liu, Hao Sun, Yanjie Xu, Gangyao Kuang |
Format: | Article |
Language: | English |
Published: | MDPI AG, 2022-09-01 |
Series: | Remote Sensing |
Subjects: | multi-source; contrastive self-supervised learning; pretraining; SAR-optical |
Online Access: | https://www.mdpi.com/2072-4292/14/18/4632 |
author | Chenfang Liu; Hao Sun; Yanjie Xu; Gangyao Kuang
author_sort | Chenfang Liu |
collection | DOAJ |
description | SAR-optical images from different sensors can provide consistent information for scene classification. However, the utilization of unlabeled SAR-optical images in deep learning-based remote sensing image interpretation remains an open issue. In recent years, contrastive self-supervised learning (CSSL) methods have shown great potential for obtaining meaningful feature representations from massive amounts of unlabeled data. This paper investigates the effectiveness of CSSL-based pretraining models for SAR-optical remote sensing classification. First, we analyze the contrastive strategies of single-source and multi-source SAR-optical data augmentation under different CSSL architectures. We find that the CSSL framework without explicit negative sample selection naturally fits the multi-source learning problem. Second, we find that registered SAR-optical image pairs can guide the Siamese self-supervised network without negative samples to learn shared features, which is also why the negative-free CSSL framework outperforms the CSSL framework with negative samples. Finally, we apply the negative-free CSSL pretrained network, which learns the shared features of SAR-optical images, to the downstream domain-adaptation task of transferring from optical to SAR images. We find that the choice of pretrained network is important for downstream tasks. |
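The core idea in the abstract, a Siamese contrastive objective without negative samples in which a registered SAR-optical pair supplies the two views, can be sketched numerically. The following is a hedged toy illustration, not the paper's implementation: the linear `encoder`, the `predictor` head, and the 16-dimensional stand-in "images" are all assumptions; a real model would use deep backbones, gradient-based training, and a stop-gradient on the target branch (as in SimSiam-style methods).

```python
import numpy as np

rng = np.random.default_rng(0)

def encoder(x, W):
    # toy linear encoder standing in for a deep backbone shared by both branches
    return np.tanh(x @ W)

def predictor(z, P):
    # small prediction head applied to one branch only
    return z @ P

def neg_cosine(p, z):
    # negative cosine similarity; in training, z would carry a stop-gradient,
    # which is what removes the need for explicit negative samples
    p = p / np.linalg.norm(p)
    z = z / np.linalg.norm(z)
    return -float(p @ z)

# registered SAR and optical "images", flattened to vectors (toy data):
# registration means both views depict the same scene content
sar = rng.normal(size=16)
opt = sar + 0.1 * rng.normal(size=16)

W = rng.normal(size=(16, 8))   # shared encoder weights
P = rng.normal(size=(8, 8))    # predictor weights

z_sar, z_opt = encoder(sar, W), encoder(opt, W)
# symmetric loss: each branch predicts the other branch's embedding
loss = 0.5 * neg_cosine(predictor(z_sar, P), z_opt) \
     + 0.5 * neg_cosine(predictor(z_opt, P), z_sar)
print(f"symmetric negative-free loss: {loss:.4f}")
```

Minimizing this loss pulls the SAR and optical embeddings of the same scene together, which is how the registered pair guides the network toward the shared features the abstract describes.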
first_indexed | 2024-03-09T22:37:27Z |
format | Article |
id | doaj.art-200e1672c7f54d54a97eee39978e1a21 |
institution | Directory Open Access Journal |
issn | 2072-4292 |
language | English |
last_indexed | 2024-03-09T22:37:27Z |
publishDate | 2022-09-01 |
publisher | MDPI AG |
record_format | Article |
series | Remote Sensing |
spelling | doaj.art-200e1672c7f54d54a97eee39978e1a21. MDPI AG, Remote Sensing, ISSN 2072-4292, 2022-09-01, vol. 14, no. 18, art. 4632, doi:10.3390/rs14184632. "Multi-Source Remote Sensing Pretraining Based on Contrastive Self-Supervised Learning". Authors (all affiliated with the State Key Laboratory of Complex Electromagnetic Environment Effects on Electronics and Information System, National University of Defense Technology, Changsha 410073, China): Chenfang Liu, Hao Sun, Yanjie Xu, Gangyao Kuang. Online access: https://www.mdpi.com/2072-4292/14/18/4632. Keywords: multi-source; contrastive self-supervised learning; pretraining; SAR-optical |
title | Multi-Source Remote Sensing Pretraining Based on Contrastive Self-Supervised Learning |
topic | multi-source; contrastive self-supervised learning; pretraining; SAR-optical
url | https://www.mdpi.com/2072-4292/14/18/4632 |