ExpPoint-MAE: Better Interpretability and Performance for Self-Supervised Point Cloud Transformers

In this paper we delve into the properties of transformers in the point cloud domain, attained through self-supervision. Specifically, we evaluate the effectiveness of Masked Autoencoding as a pretraining scheme and explore Momentum Contrast as an alternative. In our study we investigate the impac...


Bibliographic Details
Main Authors: Ioannis Romanelis, Vlassis Fotis, Konstantinos Moustakas, Adrian Munteanu
Format: Article
Language: English
Published: IEEE 2024-01-01
Series: IEEE Access
Online Access: https://ieeexplore.ieee.org/document/10497601/