Local-global aware-transformer for occluded person re-identification


Bibliographic Details
Main Authors: Jing Liu, Guoqing Zhou
Format: Article
Language: English
Published: Elsevier 2023-12-01
Series: Alexandria Engineering Journal
Online Access:http://www.sciencedirect.com/science/article/pii/S1110016823009730
Description
Summary: Security protection has recently become important in many scenarios. Occluded person re-identification (Re-ID) involves identifying obscured pedestrians from images captured by multiple cameras, even when the images are partially or fully occluded. Many state-of-the-art models for occluded Re-ID utilize auxiliary modules such as pose estimation, feature pyramids, and graph matching to address occlusion challenges. However, this approach results in complex models that struggle to generalize to diverse occlusions and may not effectively handle non-occluded pedestrians. Furthermore, real-world Re-ID applications frequently involve both occluded and non-occluded pedestrians, making it difficult to develop versatile models. To tackle these issues, we introduce a novel Re-ID model that learns discriminative features at both local and global scales for occluded pedestrian identification. Our proposed model, the Local-aware Transformer (LAT) for occluded person Re-ID, comprises three modules: a Discriminative Feature Extraction Module (DFEM), a Local Feature Extraction Module (LFEM), and a Global Feature Extraction Module (GFEM). Our experimental results on three occluded and two general Re-ID benchmarks demonstrate that our model surpasses existing state-of-the-art methods and achieves excellent performance in both occluded and non-occluded Re-ID tasks.
ISSN:1110-0168