GAN-Based Image Colorization for Self-Supervised Visual Feature Learning
Large-scale labeled datasets are generally necessary for successfully training a deep neural network in the computer vision domain. In order to avoid the costly and tedious work of manually annotating image datasets, self-supervised learning methods have been proposed to learn general visual feature...
Main Authors: | Sandra Treneska, Eftim Zdravevski, Ivan Miguel Pires, Petre Lameski, Sonja Gievska |
---|---|
Format: | Article |
Language: | English |
Published: | MDPI AG, 2022-02-01 |
Series: | Sensors |
Subjects: | self-supervised learning; transfer learning; image colorization; convolutional neural network; generative adversarial network |
Online Access: | https://www.mdpi.com/1424-8220/22/4/1599 |
_version_ | 1827652822471016448 |
---|---|
author | Sandra Treneska; Eftim Zdravevski; Ivan Miguel Pires; Petre Lameski; Sonja Gievska |
author_facet | Sandra Treneska; Eftim Zdravevski; Ivan Miguel Pires; Petre Lameski; Sonja Gievska |
author_sort | Sandra Treneska |
collection | DOAJ |
description | Large-scale labeled datasets are generally necessary for successfully training a deep neural network in the computer vision domain. In order to avoid the costly and tedious work of manually annotating image datasets, self-supervised learning methods have been proposed to learn general visual features automatically. In this paper, we first focus on image colorization with generative adversarial networks (GANs) because of their ability to generate the most realistic colorization results. Then, via transfer learning, we use this as a proxy task for visual understanding. Particularly, we propose to use conditional GANs (cGANs) for image colorization and transfer the gained knowledge to two other downstream tasks, namely, multilabel image classification and semantic segmentation. This is the first time that GANs have been used for self-supervised feature learning through image colorization. Through extensive experiments with the COCO and Pascal datasets, we show an increase of 5% for the classification task and 2.5% for the segmentation task. This demonstrates that image colorization with conditional GANs can boost other downstream tasks’ performance without the need for manual annotation. |
first_indexed | 2024-03-09T21:05:34Z |
format | Article |
id | doaj.art-d289daca6c354f958280e355a8e03583 |
institution | Directory Open Access Journal |
issn | 1424-8220 |
language | English |
last_indexed | 2024-03-09T21:05:34Z |
publishDate | 2022-02-01 |
publisher | MDPI AG |
record_format | Article |
series | Sensors |
spelling | doaj.art-d289daca6c354f958280e355a8e03583; indexed 2023-11-23T22:02:04Z; eng; MDPI AG; Sensors; 1424-8220; 2022-02-01; Vol. 22, Iss. 4, Art. 1599; doi:10.3390/s22041599; GAN-Based Image Colorization for Self-Supervised Visual Feature Learning; Sandra Treneska (Faculty of Computer Science and Engineering, University Ss. Cyril and Methodius, 1000 Skopje, North Macedonia); Eftim Zdravevski (Faculty of Computer Science and Engineering, University Ss. Cyril and Methodius, 1000 Skopje, North Macedonia); Ivan Miguel Pires (Instituto de Telecomunicações, Universidade da Beira Interior, 6200-001 Covilhã, Portugal); Petre Lameski (Faculty of Computer Science and Engineering, University Ss. Cyril and Methodius, 1000 Skopje, North Macedonia); Sonja Gievska (Faculty of Computer Science and Engineering, University Ss. Cyril and Methodius, 1000 Skopje, North Macedonia); Large-scale labeled datasets are generally necessary for successfully training a deep neural network in the computer vision domain. In order to avoid the costly and tedious work of manually annotating image datasets, self-supervised learning methods have been proposed to learn general visual features automatically. In this paper, we first focus on image colorization with generative adversarial networks (GANs) because of their ability to generate the most realistic colorization results. Then, via transfer learning, we use this as a proxy task for visual understanding. Particularly, we propose to use conditional GANs (cGANs) for image colorization and transfer the gained knowledge to two other downstream tasks, namely, multilabel image classification and semantic segmentation. This is the first time that GANs have been used for self-supervised feature learning through image colorization. Through extensive experiments with the COCO and Pascal datasets, we show an increase of 5% for the classification task and 2.5% for the segmentation task. This demonstrates that image colorization with conditional GANs can boost other downstream tasks’ performance without the need for manual annotation.; https://www.mdpi.com/1424-8220/22/4/1599; self-supervised learning; transfer learning; image colorization; convolutional neural network; generative adversarial network |
spellingShingle | Sandra Treneska; Eftim Zdravevski; Ivan Miguel Pires; Petre Lameski; Sonja Gievska; GAN-Based Image Colorization for Self-Supervised Visual Feature Learning; Sensors; self-supervised learning; transfer learning; image colorization; convolutional neural network; generative adversarial network |
title | GAN-Based Image Colorization for Self-Supervised Visual Feature Learning |
title_full | GAN-Based Image Colorization for Self-Supervised Visual Feature Learning |
title_fullStr | GAN-Based Image Colorization for Self-Supervised Visual Feature Learning |
title_full_unstemmed | GAN-Based Image Colorization for Self-Supervised Visual Feature Learning |
title_short | GAN-Based Image Colorization for Self-Supervised Visual Feature Learning |
title_sort | gan based image colorization for self supervised visual feature learning |
topic | self-supervised learning; transfer learning; image colorization; convolutional neural network; generative adversarial network |
url | https://www.mdpi.com/1424-8220/22/4/1599 |
work_keys_str_mv | AT sandratreneska ganbasedimagecolorizationforselfsupervisedvisualfeaturelearning AT eftimzdravevski ganbasedimagecolorizationforselfsupervisedvisualfeaturelearning AT ivanmiguelpires ganbasedimagecolorizationforselfsupervisedvisualfeaturelearning AT petrelameski ganbasedimagecolorizationforselfsupervisedvisualfeaturelearning AT sonjagievska ganbasedimagecolorizationforselfsupervisedvisualfeaturelearning |
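The abstract in this record outlines a two-stage pipeline: a conditional GAN is trained to colorize grayscale images (the pretext task), and the encoder learned during colorization is then transferred to downstream tasks such as multilabel classification. The following is a minimal, hypothetical PyTorch sketch of that idea; the layer sizes, pix2pix-style adversarial-plus-L1 objective, loss weighting, and the 80-class COCO head are illustrative assumptions, not the authors' published implementation.

```python
# Hypothetical sketch: cGAN colorization as a pretext task, then transfer of the encoder.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Downsampling half of the colorization generator; reused downstream."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.BatchNorm2d(128), nn.LeakyReLU(0.2),
            nn.Conv2d(128, 256, 4, stride=2, padding=1), nn.BatchNorm2d(256), nn.LeakyReLU(0.2),
        )
    def forward(self, x):
        return self.net(x)

class Generator(nn.Module):
    """Maps a grayscale (L) channel to two chrominance (ab) channels."""
    def __init__(self):
        super().__init__()
        self.encoder = Encoder()
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(256, 128, 4, stride=2, padding=1), nn.BatchNorm2d(128), nn.ReLU(),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.BatchNorm2d(64), nn.ReLU(),
            nn.ConvTranspose2d(64, 2, 4, stride=2, padding=1), nn.Tanh(),
        )
    def forward(self, gray):
        return self.decoder(self.encoder(gray))

class Discriminator(nn.Module):
    """PatchGAN-style critic conditioned on the grayscale input (L + ab = 3 channels)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.BatchNorm2d(128), nn.LeakyReLU(0.2),
            nn.Conv2d(128, 1, 4, stride=1, padding=1),
        )
    def forward(self, gray, color):
        return self.net(torch.cat([gray, color], dim=1))

def pretext_step(G, D, opt_g, opt_d, gray, ab, l1_weight=100.0):
    """One cGAN colorization step (adversarial + L1 objective); weights are assumptions."""
    bce, l1 = nn.BCEWithLogitsLoss(), nn.L1Loss()
    fake_ab = G(gray)
    # Discriminator update: real (gray, ab) pairs vs. generated pairs.
    opt_d.zero_grad()
    d_real, d_fake = D(gray, ab), D(gray, fake_ab.detach())
    d_loss = bce(d_real, torch.ones_like(d_real)) + bce(d_fake, torch.zeros_like(d_fake))
    d_loss.backward(); opt_d.step()
    # Generator update: fool the discriminator while staying close to the true colors.
    opt_g.zero_grad()
    d_fake = D(gray, fake_ab)
    g_loss = bce(d_fake, torch.ones_like(d_fake)) + l1_weight * l1(fake_ab, ab)
    g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()

class DownstreamClassifier(nn.Module):
    """Transfer learning: pretrained colorization encoder plus a small multilabel head."""
    def __init__(self, pretrained_encoder, num_classes=80):  # 80 object categories as in COCO
        super().__init__()
        self.encoder = pretrained_encoder
        self.head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(256, num_classes))
    def forward(self, gray):
        return self.head(self.encoder(gray))  # multilabel logits; pair with BCEWithLogitsLoss

if __name__ == "__main__":
    G, D = Generator(), Discriminator()
    opt_g = torch.optim.Adam(G.parameters(), lr=2e-4, betas=(0.5, 0.999))
    opt_d = torch.optim.Adam(D.parameters(), lr=2e-4, betas=(0.5, 0.999))
    gray, ab = torch.rand(4, 1, 64, 64), torch.rand(4, 2, 64, 64) * 2 - 1  # toy batch
    print(pretext_step(G, D, opt_g, opt_d, gray, ab))
    clf = DownstreamClassifier(G.encoder)     # reuse the pretext-trained encoder
    print(clf(gray).shape)                    # torch.Size([4, 80])
```

In a setup like this, only the encoder weights carry over to the downstream task; a common choice is to fine-tune them at a lower learning rate than the freshly initialized head, or to freeze them entirely for a linear-evaluation baseline.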