TSG-Seg: temporal-selective guidance for semi-supervised semantic segmentation of 3D LiDAR point clouds


Bibliographic Details
Main Authors: Xuan, Weihao; Qi, Heli; Xiao, Aoran
Other Authors: College of Computing and Data Science
Format: Journal Article
Language: English
Published: 2024
Subjects: Computer and Information Science; 3D point cloud; LiDAR
Online Access: https://hdl.handle.net/10356/180795
author Xuan, Weihao
Qi, Heli
Xiao, Aoran
author2 College of Computing and Data Science
collection NTU
description LiDAR-based semantic scene understanding plays a pivotal role in various applications, including remote sensing and autonomous driving. However, most LiDAR segmentation models rely on extensive, densely annotated training datasets, which are extremely laborious to annotate and hinder the widespread adoption of LiDAR systems. Semi-supervised learning (SSL) offers a promising solution by leveraging only a small amount of labeled data together with a larger set of unlabeled data, aiming to train robust models with accuracy comparable to fully supervised learning. A typical SSL pipeline first uses the labeled data to train a segmentation model, then uses the predictions generated on unlabeled data as pseudo-ground truths for model retraining. However, the scarcity of labeled data limits the capture of comprehensive representations, constraining the reliability of these pseudo-ground truths. We observed that objects captured by LiDAR sensors from varying perspectives exhibit diverse data characteristics due to occlusion and distance variation, and that LiDAR segmentation models trained with limited labels are susceptible to these viewpoint disparities, resulting in inaccurately predicted pseudo-ground truths across viewpoints and the accumulation of retraining errors. To address this problem, we introduce the Temporal-Selective Guided Learning (TSG-Seg) framework. TSG-Seg exploits temporal cues inherent in LiDAR frame sequences to bridge cross-viewpoint representations, fostering consistent and robust segmentation predictions across differing viewpoints. Specifically, we first establish point-wise correspondences across LiDAR frames with different timestamps through point registration. Reliable point predictions are then selected and propagated from adjacent views to points in the current view, serving as strong and refined supervision signals for subsequent model retraining to achieve better segmentation. We conducted extensive experiments on various SSL labeling setups across multiple public datasets, including SemanticKITTI and SemanticPOSS, to evaluate the effectiveness of TSG-Seg. Our results demonstrate its competitive performance and robustness in diverse scenarios, from data-limited to data-abundant settings. Notably, TSG-Seg achieves a mIoU of 48.6% with only 5% of labeled data and 62.3% with 40% of labeled data in the sequential split of SemanticKITTI, consistently outperforming state-of-the-art segmentation methods, including GPC and LaserMix. These findings underscore TSG-Seg's superior capability and potential for real-world applications. The project can be found at https://tsgseg.github.io.
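For readers skimming the record, the following is a minimal, self-contained sketch (plain NumPy) of the kind of temporal-selective pseudo-label propagation the abstract describes: predictions on an adjacent LiDAR frame are filtered by confidence, registered into the current frame, and attached to nearby points as pseudo-labels for retraining. This is not the authors' released implementation (see https://tsgseg.github.io); the helper names, confidence threshold, and matching radius are illustrative assumptions only.

# Illustrative sketch only -- NOT the authors' released code.
# Mimics the pipeline described in the abstract: predict on an adjacent
# LiDAR frame, keep only confident points, register them into the current
# frame, and transfer their labels to nearby points as pseudo-labels.
import numpy as np

CONF_THRESH = 0.6    # illustrative confidence threshold for "reliable" points
MATCH_RADIUS = 0.2   # metres; max distance for a cross-frame correspondence


def predict_proba(points: np.ndarray, num_classes: int = 4) -> np.ndarray:
    """Stand-in for a trained segmentation model: per-point class probabilities."""
    rng = np.random.default_rng(0)
    logits = rng.normal(size=(points.shape[0], num_classes))
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)


def register(points_src: np.ndarray, pose_src_to_cur: np.ndarray) -> np.ndarray:
    """Bring an adjacent frame into the current frame with a rigid transform
    (in practice obtained from odometry or a point-registration method)."""
    homog = np.hstack([points_src, np.ones((points_src.shape[0], 1))])
    return (pose_src_to_cur @ homog.T).T[:, :3]


def propagate_pseudo_labels(cur_pts, adj_pts, pose_adj_to_cur):
    """Select confident predictions on the adjacent frame and transfer them
    to the nearest points of the current frame."""
    proba = predict_proba(adj_pts)
    conf, labels = proba.max(axis=1), proba.argmax(axis=1)
    keep = conf > CONF_THRESH                       # selective: drop unreliable points
    adj_in_cur = register(adj_pts[keep], pose_adj_to_cur)

    pseudo = np.full(cur_pts.shape[0], -1, dtype=int)   # -1 = unlabeled
    if adj_in_cur.size == 0:
        return pseudo
    # Brute-force nearest neighbour; a KD-tree would be used at real scale.
    d = np.linalg.norm(cur_pts[:, None, :] - adj_in_cur[None, :, :], axis=2)
    nn = d.argmin(axis=1)
    matched = d[np.arange(cur_pts.shape[0]), nn] < MATCH_RADIUS
    pseudo[matched] = labels[keep][nn[matched]]
    return pseudo


if __name__ == "__main__":
    rng = np.random.default_rng(1)
    cur_frame = rng.uniform(-10, 10, size=(500, 3))
    adj_frame = cur_frame + rng.normal(scale=0.05, size=cur_frame.shape)  # toy adjacent scan
    identity_pose = np.eye(4)                       # toy pose; real frames need registration
    pl = propagate_pseudo_labels(cur_frame, adj_frame, identity_pose)
    print(f"{(pl >= 0).sum()} / {len(pl)} points received pseudo-labels")
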
format Journal Article
id ntu-10356/180795
institution Nanyang Technological University
language English
publishDate 2024
citation Xuan, W., Qi, H. & Xiao, A. (2024). TSG-Seg: temporal-selective guidance for semi-supervised semantic segmentation of 3D LiDAR point clouds. ISPRS Journal of Photogrammetry and Remote Sensing, 216, 217-228. https://dx.doi.org/10.1016/j.isprsjprs.2024.07.020
journal ISPRS Journal of Photogrammetry and Remote Sensing
issn 0924-2716
doi 10.1016/j.isprsjprs.2024.07.020
scopus 2-s2.0-85200633791
date deposited 2024-10-28
copyright © 2024 International Society for Photogrammetry and Remote Sensing, Inc. (ISPRS). Published by Elsevier B.V. All rights are reserved, including those for text and data mining, AI training, and similar technologies.
topic Computer and Information Science
3D point cloud
LiDAR
url https://hdl.handle.net/10356/180795