ResSANet: Learning Geometric Information for Point Cloud Processing

Bibliographic Details
Main Authors: Xiaojun Zhu, Zheng Zhang, Jian Ruan, Houde Liu, Hanxu Sun
Format: Article
Language: English
Published: MDPI AG 2021-05-01
Series: Sensors
Online Access: https://www.mdpi.com/1424-8220/21/9/3227
Description
Summary: Point clouds with rich local geometric information have potentially huge implications for several applications, especially robotic manipulation and autonomous driving. However, most point cloud processing methods cannot extract enough geometric features from a raw point cloud, which restricts the performance of downstream tasks such as point cloud classification, shape retrieval, and part segmentation. In this paper, we propose a new method in which a convolution based on geometric primitives is adopted to accurately represent the elusive shapes described by point clouds and to fully extract their hidden geometric features. The key idea is to build a new convolutional network, named ResSANet, on the basis of geometric primitives to learn hierarchical geometric information. Two modules, Res-SA and Res-SA-2, are devised to achieve feature fusion at different levels in ResSANet. This work achieves a classification accuracy of up to 93.2% on the ModelNet40 dataset and 87.4% on the shape retrieval task. The part segmentation experiment achieves 83.3% class mIoU and 85.3% instance mIoU on the ShapeNet dataset. Notably, the network has only 1.04 M parameters and a very shallow depth. Experimental results and comparisons with state-of-the-art methods demonstrate that our approach achieves superior performance.
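
The record does not spell out how a Res-SA block is wired internally. As a rough illustration of the idea the abstract describes (a set-abstraction-style shared convolution over local point neighborhoods, with feature fusion via a residual shortcut), the following PyTorch sketch may help; the class name ResSABlock, the tensor shapes, and the shortcut projection are assumptions for this sketch, not definitions taken from the paper.

    # Hypothetical sketch of a residual set-abstraction (Res-SA-style) block.
    # The paper's exact layer definitions are not given in this record; the
    # names, shapes, and residual wiring below are illustrative assumptions.
    import torch
    import torch.nn as nn

    class ResSABlock(nn.Module):
        """Shared MLP over grouped point neighborhoods with a residual shortcut."""

        def __init__(self, in_dim: int, out_dim: int):
            super().__init__()
            # Point-wise MLP applied to every (center, neighbor) feature pair.
            self.mlp = nn.Sequential(
                nn.Conv2d(in_dim, out_dim, 1), nn.BatchNorm2d(out_dim), nn.ReLU(),
                nn.Conv2d(out_dim, out_dim, 1), nn.BatchNorm2d(out_dim),
            )
            # 1x1 projection so the shortcut matches the output width.
            self.shortcut = nn.Conv2d(in_dim, out_dim, 1)
            self.act = nn.ReLU()

        def forward(self, grouped: torch.Tensor) -> torch.Tensor:
            # grouped: (batch, in_dim, n_centers, k_neighbors)
            out = self.mlp(grouped) + self.shortcut(grouped)  # residual fusion
            # Max-pool over the k neighbors -> one feature per center point.
            return self.act(out).max(dim=-1).values  # (batch, out_dim, n_centers)

    if __name__ == "__main__":
        block = ResSABlock(in_dim=6, out_dim=64)  # e.g. xyz + relative xyz
        feats = torch.randn(2, 6, 512, 32)        # grouped neighborhoods
        print(block(feats).shape)                 # torch.Size([2, 64, 512])

The max-pool over neighbors is the standard way such blocks stay invariant to the ordering of points within a neighborhood; how ResSANet actually fuses features across its Res-SA and Res-SA-2 levels is detailed only in the full article linked above.
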
ISSN:1424-8220