A Two-Stage Pillar Feature-Encoding Network for Pillar-Based 3D Object Detection
Three-dimensional object detection plays a vital role in environment perception for autonomous driving, and its results are crucial for subsequent processes. Pillar-based 3D object detection detects objects in 3D by dividing point cloud data into pillars and extracting features from each pillar.
Main Authors: Hao Xu, Xiang Dong, Wenxuan Wu, Biao Yu, Hui Zhu
Format: Article
Language: English
Published: MDPI AG, 2023-06-01
Series: World Electric Vehicle Journal
Subjects: point cloud; autonomous vehicles; 3D object detection; pillar; LiDAR
Online Access: https://www.mdpi.com/2032-6653/14/6/146
author | Hao Xu; Xiang Dong; Wenxuan Wu; Biao Yu; Hui Zhu
collection | DOAJ |
description | Three-dimensional object detection plays a vital role in environment perception for autonomous driving, and its results are crucial for subsequent processes. Pillar-based 3D object detection detects objects in 3D by dividing point cloud data into pillars and extracting features from each pillar. However, current pillar-based 3D object-detection methods suffer from problems such as “under-segmentation” and false detections in overlapping and occluded scenes. To address these challenges, we propose an improved pillar-based 3D object-detection network with a two-stage pillar feature-encoding (Ts-PFE) module that considers both the inter-pillar and intra-pillar relational features. This novel approach enhances the model’s ability to identify the local structure and global distribution of the data, which improves the distinction between objects in occluded and overlapping scenes and ultimately reduces under-segmentation and false-detection problems. Furthermore, we use an attention mechanism to improve the backbone and make it focus on important features. The proposed approach is evaluated on the KITTI dataset. The experimental results show that the detection accuracy of the proposed approach is significantly improved on the BEV and 3D benchmarks. The improvements in 3D detection AP for car, pedestrian, and cyclist are 1.1%, 3.78%, and 2.23% over PointPillars, respectively.
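For readers unfamiliar with the pillar representation the abstract refers to, the following is a minimal, illustrative sketch (not the authors' code) of the pillarization step: grouping a LiDAR point cloud into bird's-eye-view (BEV) pillars, in the spirit of PointPillars. The grid ranges, pillar size, point cap, and function name are assumptions chosen only for illustration.

```python
import numpy as np

def pointcloud_to_pillars(points, x_range=(0.0, 69.12), y_range=(-39.68, 39.68),
                          pillar_size=0.16, max_points_per_pillar=32):
    """Group LiDAR points (N, 4: x, y, z, intensity) into BEV pillars.

    Returns a dict mapping (ix, iy) grid indices to the array of points that
    fall inside that pillar (capped at max_points_per_pillar). Only the
    grouping step is sketched here; the per-pillar feature encoder is omitted.
    """
    # Keep only points inside the BEV range.
    mask = ((points[:, 0] >= x_range[0]) & (points[:, 0] < x_range[1]) &
            (points[:, 1] >= y_range[0]) & (points[:, 1] < y_range[1]))
    pts = points[mask]

    # Compute each point's pillar index on the BEV grid.
    ix = ((pts[:, 0] - x_range[0]) / pillar_size).astype(np.int32)
    iy = ((pts[:, 1] - y_range[0]) / pillar_size).astype(np.int32)

    pillars = {}
    for p, i, j in zip(pts, ix, iy):
        bucket = pillars.setdefault((int(i), int(j)), [])
        if len(bucket) < max_points_per_pillar:
            bucket.append(p)
    return {k: np.stack(v) for k, v in pillars.items()}

if __name__ == "__main__":
    # Toy point cloud: 10,000 random points with x, y, z, intensity.
    rng = np.random.default_rng(0)
    cloud = rng.uniform([0, -40, -3, 0], [70, 40, 1, 1], size=(10000, 4))
    pillars = pointcloud_to_pillars(cloud)
    print(f"{len(pillars)} non-empty pillars")
```

In a full pipeline, each non-empty pillar would then be passed through a per-pillar feature encoder; the paper's Ts-PFE module additionally models the inter-pillar and intra-pillar relations described in the abstract, which this sketch does not attempt to reproduce.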
format | Article |
id | doaj.art-428dc2ad8f09444bb3ff508ac6c8575a |
institution | Directory Open Access Journal |
issn | 2032-6653 |
language | English |
publishDate | 2023-06-01 |
publisher | MDPI AG |
record_format | Article |
series | World Electric Vehicle Journal |
doi | 10.3390/wevj14060146
citation | World Electric Vehicle Journal, vol. 14, no. 6, article 146, 2023-06-01
affiliations | Hao Xu, Xiang Dong, Wenxuan Wu: School of Electrical Engineering and Automation, Anhui University, Hefei 230601, China; Biao Yu, Hui Zhu: Hefei Institutes of Physical Science, Chinese Academy of Sciences, Hefei 230031, China
title | A Two-Stage Pillar Feature-Encoding Network for Pillar-Based 3D Object Detection |
topic | point cloud; autonomous vehicles; 3D object detection; pillar; LiDAR
url | https://www.mdpi.com/2032-6653/14/6/146 |