Revisiting Consistency for Semi-Supervised Semantic Segmentation
Semi-supervised learning is an attractive technique in practical deployments of deep models since it relaxes the dependence on labeled data. It is especially important in the scope of dense prediction because pixel-level annotation requires substantial effort. This paper considers semi-supervised algorithms that enforce consistent predictions over perturbed unlabeled inputs.
Main Authors: | Ivan Grubišić, Marin Oršić, Siniša Šegvić |
---|---|
Format: | Article |
Language: | English |
Published: | MDPI AG, 2023-01-01 |
Series: | Sensors |
Subjects: | semi-supervised learning; semantic segmentation; dense prediction; one-way consistency; deep learning; scene understanding |
Online Access: | https://www.mdpi.com/1424-8220/23/2/940 |
_version_ | 1797437222305136640 |
---|---|
author | Ivan Grubišić; Marin Oršić; Siniša Šegvić
author_facet | Ivan Grubišić; Marin Oršić; Siniša Šegvić
author_sort | Ivan Grubišić |
collection | DOAJ |
description | Semi-supervised learning is an attractive technique in practical deployments of deep models since it relaxes the dependence on labeled data. It is especially important in the scope of dense prediction because pixel-level annotation requires substantial effort. This paper considers semi-supervised algorithms that enforce consistent predictions over perturbed unlabeled inputs. We study the advantages of perturbing only one of the two model instances and preventing the backward pass through the unperturbed instance. We also propose a competitive perturbation model as a composition of geometric warp and photometric jittering. We experiment with efficient models due to their importance for real-time and low-power applications. Our experiments show clear advantages of (1) one-way consistency, (2) perturbing only the student branch, and (3) strong photometric and geometric perturbations. Our perturbation model outperforms recent work and most of the contribution comes from the photometric component. Experiments with additional data from the large coarsely annotated subset of Cityscapes suggest that semi-supervised training can outperform supervised training with coarse labels. Our source code is available at https://github.com/Ivan1248/semisup-seg-efficient. |
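The abstract describes a consistency objective in which the teacher branch sees the clean image with no backward pass through it, while the student branch sees a perturbation composed of a geometric warp and photometric jittering. Below is a minimal NumPy sketch of that idea; the jitter ranges, the flip-only warp, and the toy per-pixel classifier are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def photometric_jitter(img, rng):
    """Random brightness/contrast change (a simple stand-in for the
    paper's photometric component; the exact jitter is an assumption)."""
    brightness = rng.uniform(-0.1, 0.1)
    contrast = rng.uniform(0.8, 1.2)
    return np.clip((img - 0.5) * contrast + 0.5 + brightness, 0.0, 1.0)

def geometric_warp(img, rng):
    """Random horizontal flip as a minimal geometric warp (the paper
    uses a richer warp; a flip keeps this sketch dependency-free)."""
    return img[:, ::-1] if rng.random() < 0.5 else img

def perturb(img, rng):
    # Composition: geometric warp followed by photometric jittering.
    return photometric_jitter(geometric_warp(img, rng), rng)

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def model(img, W):
    """Toy per-pixel classifier: class logits are a linear map of the
    pixel value (hypothetical; stands in for a segmentation network)."""
    return softmax(img[..., None] * W, axis=-1)

def one_way_consistency(img, W, rng):
    """Mean per-pixel KL(teacher || student). The teacher sees the clean
    image and its prediction is treated as a constant target (no backward
    pass through that branch); only the student, fed the perturbed image,
    would receive gradients. Note: a real implementation aligns the
    teacher prediction with the geometric warp before comparing;
    that alignment is skipped here for brevity."""
    teacher = model(img, W)                 # constant target (stop-gradient)
    student = model(perturb(img, rng), W)   # perturbed student branch
    kl = np.sum(teacher * (np.log(teacher) - np.log(student)), axis=-1)
    return float(np.mean(kl))

img = rng.random((4, 4))       # tiny grayscale "image" with values in [0, 1)
W = rng.normal(size=(3,))      # 3 hypothetical classes
loss = one_way_consistency(img, W, rng)
```

The key design point sketched here is the asymmetry: because the teacher term is a constant, minimizing the loss can only move the student's prediction on the perturbed input toward the clean-input prediction, never the reverse.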
first_indexed | 2024-03-09T11:15:45Z |
format | Article |
id | doaj.art-4333a27bfbfc48f698d3bb6982fff0a4 |
institution | Directory Open Access Journal |
issn | 1424-8220 |
language | English |
last_indexed | 2024-03-09T11:15:45Z |
publishDate | 2023-01-01 |
publisher | MDPI AG |
record_format | Article |
series | Sensors |
spelling | doaj.art-4333a27bfbfc48f698d3bb6982fff0a4; indexed 2023-12-01T00:30:09Z; eng; MDPI AG; Sensors (ISSN 1424-8220); published 2023-01-01; Vol. 23, Iss. 2, Art. 940; DOI 10.3390/s23020940; "Revisiting Consistency for Semi-Supervised Semantic Segmentation"; Ivan Grubišić (Faculty of Electrical Engineering and Computing, University of Zagreb, Unska 3, 10000 Zagreb, Croatia); Marin Oršić (Microblink Ltd., Strojarska Cesta 20, 10000 Zagreb, Croatia); Siniša Šegvić (Faculty of Electrical Engineering and Computing, University of Zagreb, Unska 3, 10000 Zagreb, Croatia); https://www.mdpi.com/1424-8220/23/2/940; keywords: semi-supervised learning; semantic segmentation; dense prediction; one-way consistency; deep learning; scene understanding
spellingShingle | Ivan Grubišić; Marin Oršić; Siniša Šegvić; Revisiting Consistency for Semi-Supervised Semantic Segmentation; Sensors; semi-supervised learning; semantic segmentation; dense prediction; one-way consistency; deep learning; scene understanding
title | Revisiting Consistency for Semi-Supervised Semantic Segmentation |
title_full | Revisiting Consistency for Semi-Supervised Semantic Segmentation |
title_fullStr | Revisiting Consistency for Semi-Supervised Semantic Segmentation |
title_full_unstemmed | Revisiting Consistency for Semi-Supervised Semantic Segmentation |
title_short | Revisiting Consistency for Semi-Supervised Semantic Segmentation |
title_sort | revisiting consistency for semi supervised semantic segmentation |
topic | semi-supervised learning; semantic segmentation; dense prediction; one-way consistency; deep learning; scene understanding
url | https://www.mdpi.com/1424-8220/23/2/940 |
work_keys_str_mv | AT ivangrubisic revisitingconsistencyforsemisupervisedsemanticsegmentation AT marinorsic revisitingconsistencyforsemisupervisedsemanticsegmentation AT sinisasegvic revisitingconsistencyforsemisupervisedsemanticsegmentation |