PPE: Point position embedding for single object tracking in point clouds

Abstract: Existing 3D single object tracking methods primarily extract features from the global coordinates of point clouds, overlooking the potential exploitation of their positional information. However, due to the unordered, sparse, and irregular nature of point clouds, effectively exploring their positional information presents a significant challenge. In this letter, the network is explicitly reformulated by introducing a point position embedding module in conjunction with a self-attention coding module, replacing the use of global coordinate inputs. The proposed reformulation is further integrated into the state-of-the-art model M2-Track, and the resulting model is called Point Position Embedding (PPE) in this letter. Comprehensive empirical analyses are performed on the KITTI and NuScenes datasets. Experimental results show that PPE surpasses M2-Track by a large margin in overall performance. In particular, on the challenging NuScenes dataset, the method attains the highest precision and success in all classes compared with state-of-the-art methods. The code is available at https://github.com/GZHU-DVL/PPE.
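
As a rough illustration of the architecture described in the abstract (not the authors' released implementation), the sketch below embeds per-point coordinates with a small MLP and then encodes the embeddings with a multi-head self-attention block, instead of feeding raw global xyz coordinates to the backbone. All class names, dimensions, and layer choices here are assumptions made for this sketch.

```python
# Minimal, illustrative sketch: per-point position embedding followed by
# self-attention coding. Module names and sizes are hypothetical.
import torch
import torch.nn as nn


class PointPositionEmbedding(nn.Module):
    """Embed per-point coordinates into a higher-dimensional feature space."""

    def __init__(self, in_dim: int = 3, embed_dim: int = 128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(in_dim, embed_dim),
            nn.ReLU(inplace=True),
            nn.Linear(embed_dim, embed_dim),
        )

    def forward(self, xyz: torch.Tensor) -> torch.Tensor:
        # xyz: (B, N, 3) point coordinates -> (B, N, embed_dim) embeddings
        return self.mlp(xyz)


class SelfAttentionCoding(nn.Module):
    """Single residual multi-head self-attention block over embedded points."""

    def __init__(self, embed_dim: int = 128, num_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(embed_dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(embed_dim)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (B, N, embed_dim); attend over all points and add a residual
        out, _ = self.attn(feats, feats, feats)
        return self.norm(feats + out)


if __name__ == "__main__":
    points = torch.randn(2, 1024, 3)          # dummy batch of point clouds
    feats = PointPositionEmbedding()(points)  # position embeddings
    feats = SelfAttentionCoding()(feats)      # attention-coded features
    print(feats.shape)                        # torch.Size([2, 1024, 128])
```

In the letter, features of this kind replace the global-coordinate input of M2-Track; the exact module design in the published method may differ from this sketch.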

Bibliographic Details
Main Authors: Yuanzhi Su, Yuan‐Gen Wang, Weijia Wang, Guopu Zhu
Affiliations: Yuanzhi Su and Yuan‐Gen Wang: School of Computer Science and Cyber Engineering, Guangzhou University, Guangzhou, China; Weijia Wang: School of Information Technology, Deakin University, Waurn Ponds Campus, Geelong, Australia; Guopu Zhu: School of Cyberspace Security, Harbin Institute of Technology, Harbin, China
Format: Article
Language: English
Published: Wiley, 2023-08-01
Series: Electronics Letters, Volume 59, Issue 15
ISSN: 0013-5194, 1350-911X
Subjects: computer vision, image motion analysis, image recognition, object tracking
Online Access: https://doi.org/10.1049/ell2.12914
Collection: DOAJ (Directory of Open Access Journals), record doaj.art-a1e86da11bba47928219beb94e6f26c1