LSNet: Learned Sampling Network for 3D Object Detection from Point Clouds
The 3D object detection of LiDAR point cloud data has generated widespread discussion and implementation in recent years. In this paper, we concentrate on exploring the sampling method of point-based 3D object detection in autonomous driving scenarios, a process which attempts to reduce expenditure b...
Main Authors: | Mingming Wang, Qingkui Chen, Zhibing Fu |
---|---|
Format: | Article |
Language: | English |
Published: | MDPI AG, 2022-03-01 |
Series: | Remote Sensing |
Subjects: | 3D object detection; point cloud; sampling; single-stage |
Online Access: | https://www.mdpi.com/2072-4292/14/7/1539 |
_version_ | 1827622617919520768 |
---|---|
author | Mingming Wang; Qingkui Chen; Zhibing Fu
author_facet | Mingming Wang; Qingkui Chen; Zhibing Fu
author_sort | Mingming Wang |
collection | DOAJ |
description | The 3D object detection of LiDAR point cloud data has generated widespread discussion and implementation in recent years. In this paper, we concentrate on the sampling method of point-based 3D object detection in autonomous driving scenarios, a process which attempts to reduce expenditure by reaching sufficient accuracy using fewer selected points. FPS (farthest point sampling), the most widely used sampling method, works poorly with small sampling sizes, and, limited by the massive number of points, some newly proposed deep-learning-based sampling methods are not suitable for autonomous driving scenarios. To address these issues, we propose the learned sampling network (LSNet), a single-stage 3D object detection network containing an LS module that can sample important points through deep learning. This approach can sample points with a task-specific focus while also being differentiable. Additionally, the LS module is streamlined for computational efficiency and transferability, so that it can replace more primitive sampling methods in other point-based networks. To reduce the high repetition rate of sampled points, a sampling loss algorithm was developed. The LS module was validated on the KITTI dataset and outperformed other sampling methods such as FPS and F-FPS (FPS based on feature distance). Finally, LSNet achieves acceptable accuracy with only 128 sampled points and shows promising results when the number of sampled points is small, yielding up to a 60% improvement over competing methods with eight sampled points. |
first_indexed | 2024-03-09T11:29:12Z |
format | Article |
id | doaj.art-1c518928ec764832a4c04bb61b1cf42d |
institution | Directory Open Access Journal |
issn | 2072-4292 |
language | English |
last_indexed | 2024-03-09T11:29:12Z |
publishDate | 2022-03-01 |
publisher | MDPI AG |
record_format | Article |
series | Remote Sensing |
spelling | doaj.art-1c518928ec764832a4c04bb61b1cf42d (2023-11-30T23:55:27Z); eng; MDPI AG; Remote Sensing; 2072-4292; 2022-03-01; vol. 14, no. 7, art. 1539; doi: 10.3390/rs14071539; LSNet: Learned Sampling Network for 3D Object Detection from Point Clouds; Mingming Wang, Qingkui Chen, Zhibing Fu (all: Department of Systems Science, Business School, University of Shanghai for Science and Technology, Shanghai 200093, China); https://www.mdpi.com/2072-4292/14/7/1539; 3D object detection; point cloud; sampling; single-stage
spellingShingle | Mingming Wang Qingkui Chen Zhibing Fu LSNet: Learned Sampling Network for 3D Object Detection from Point Clouds Remote Sensing 3D object detection point cloud sampling single-stage |
title | LSNet: Learned Sampling Network for 3D Object Detection from Point Clouds |
title_full | LSNet: Learned Sampling Network for 3D Object Detection from Point Clouds |
title_fullStr | LSNet: Learned Sampling Network for 3D Object Detection from Point Clouds |
title_full_unstemmed | LSNet: Learned Sampling Network for 3D Object Detection from Point Clouds |
title_short | LSNet: Learned Sampling Network for 3D Object Detection from Point Clouds |
title_sort | lsnet learned sampling network for 3d object detection from point clouds |
topic | 3D object detection; point cloud; sampling; single-stage
url | https://www.mdpi.com/2072-4292/14/7/1539 |
work_keys_str_mv | AT mingmingwang lsnetlearnedsamplingnetworkfor3dobjectdetectionfrompointclouds AT qingkuichen lsnetlearnedsamplingnetworkfor3dobjectdetectionfrompointclouds AT zhibingfu lsnetlearnedsamplingnetworkfor3dobjectdetectionfrompointclouds |
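For readers skimming the record, the FPS baseline that the abstract compares against can be stated compactly. The following is a minimal NumPy sketch of farthest point sampling on raw coordinates, not code from the paper; the F-FPS variant mentioned in the abstract measures distance in feature space rather than Euclidean space.

```python
import numpy as np

def farthest_point_sampling(points: np.ndarray, num_samples: int) -> np.ndarray:
    """Select num_samples point indices from an (N, 3) array by repeatedly
    picking the point farthest from the set chosen so far."""
    n = points.shape[0]
    selected = np.zeros(num_samples, dtype=np.int64)
    # Distance from every point to its nearest already-selected point.
    min_dist = np.full(n, np.inf)
    # Start from an arbitrary point (here: index 0).
    current = 0
    for i in range(num_samples):
        selected[i] = current
        dist = np.linalg.norm(points - points[current], axis=1)
        min_dist = np.minimum(min_dist, dist)
        current = int(np.argmax(min_dist))
    return selected
```

In the KITTI setting described in the abstract, `points` would be the (N, 3) LiDAR coordinates of one scene and `num_samples` a small budget such as 128 or 8.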
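The central idea in the abstract, selecting points through a learned, differentiable, task-aware module, is commonly realized by scoring each point with a small network and keeping the top-k scores while letting gradients flow through the score values. The PyTorch sketch below illustrates that general pattern only; the class name, tensor shapes, and score gating are assumptions for illustration, it is not LSNet's LS module, and it omits the sampling loss that LSNet uses to suppress repeated selections.

```python
import torch
import torch.nn as nn

class ScoreBasedSampler(nn.Module):
    """Illustrative learned sampler: score each point with an MLP, keep the
    top-k, and weight the gathered features by the (sigmoid-squashed) scores
    so the scoring network receives gradients from the detection task."""
    def __init__(self, feat_dim: int, num_samples: int):
        super().__init__()
        self.num_samples = num_samples
        self.score_net = nn.Sequential(
            nn.Linear(feat_dim, 64), nn.ReLU(),
            nn.Linear(64, 1),
        )

    def forward(self, xyz: torch.Tensor, feats: torch.Tensor):
        # xyz: (B, N, 3) coordinates, feats: (B, N, C) per-point features
        scores = self.score_net(feats).squeeze(-1)            # (B, N)
        topk = torch.topk(scores, self.num_samples, dim=1)    # keep k best-scoring points
        idx = topk.indices                                     # (B, K)
        gate = torch.sigmoid(topk.values).unsqueeze(-1)        # (B, K, 1) soft weights
        sampled_xyz = torch.gather(xyz, 1, idx.unsqueeze(-1).expand(-1, -1, 3))
        sampled_feats = torch.gather(
            feats, 1, idx.unsqueeze(-1).expand(-1, -1, feats.size(-1))
        ) * gate
        return sampled_xyz, sampled_feats, idx
```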