Multi-Modal Long-Term Person Re-Identification Using Physical Soft Bio-Metrics and Body Figure
Person re-identification is the task of recognizing a subject across non-overlapping cameras, views, and times. Most state-of-the-art datasets and proposed solutions address short-term re-identification: those models can re-identify a person only as long as they are wearing the same clothes.
Main Authors: | Nadeen Shoukry, Mohamed A. Abd El Ghany, Mohammed A.-M. Salem |
---|---|
Format: | Article |
Language: | English |
Published: | MDPI AG, 2022-03-01 |
Series: | Applied Sciences |
Subjects: | FaceNet; long-term person re-identification; OSNet; PCB; PRCC dataset; Siamese network |
Online Access: | https://www.mdpi.com/2076-3417/12/6/2835 |
author | Nadeen Shoukry; Mohamed A. Abd El Ghany; Mohammed A.-M. Salem |
collection | DOAJ |
description | Person re-identification is the task of recognizing a subject across non-overlapping cameras, views, and times. Most state-of-the-art datasets and proposed solutions address short-term re-identification: those models can re-identify a person only as long as they are wearing the same clothes. The work presented in this paper addresses long-term re-identification, so the proposed model is trained on a dataset that incorporates clothes variation. The paper proposes a multi-modal person re-identification model. The first modality includes physical soft bio-metrics: hair, face, neck, shoulders, and part of the chest. The second modality is the remaining body figure, which mainly captures clothes. The proposed model is composed of two separate neural networks, one per modality. For the first modality, a two-stream Siamese network with pre-trained FaceNet as the feature extractor is utilized; for the second modality, a Part-based Convolutional Baseline (PCB) classifier with OSNet as the feature extractor network. Experiments confirm that the proposed model outperforms several state-of-the-art models, achieving 81.4% accuracy on Rank-1, 82.3% on Rank-5, 83.1% on Rank-10, and 83.7% on Rank-20. |
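The two-modality design described in the abstract can be illustrated with a small score-fusion sketch. This is not the authors' implementation: the embedding dimensions, the use of cosine distance, and the fusion weight `alpha` are all assumptions made for illustration only.

```python
import numpy as np

def cosine_distance(a, b):
    """Pairwise cosine distance between rows of a (queries) and b (gallery)."""
    a = a / np.linalg.norm(a, axis=1, keepdims=True)
    b = b / np.linalg.norm(b, axis=1, keepdims=True)
    return 1.0 - a @ b.T

def fused_distance(q_face, g_face, q_body, g_body, alpha=0.6):
    """Weighted fusion of the soft-biometric and body-figure distances.

    alpha is a hypothetical weight; here it slightly favours the
    clothes-invariant soft-biometric modality.
    """
    d_face = cosine_distance(q_face, g_face)
    d_body = cosine_distance(q_body, g_body)
    return alpha * d_face + (1.0 - alpha) * d_body

# Toy example: 2 query embeddings vs. 3 gallery embeddings per modality.
rng = np.random.default_rng(0)
q_face, g_face = rng.normal(size=(2, 128)), rng.normal(size=(3, 128))
q_body, g_body = rng.normal(size=(2, 256)), rng.normal(size=(3, 256))
D = fused_distance(q_face, g_face, q_body, g_body)
print(D.shape)  # one fused distance per query-gallery pair: (2, 3)
```

In practice the face-region embeddings would come from the FaceNet stream and the body embeddings from OSNet; the linear fusion above is just one simple way to combine two modality-specific distance matrices before ranking.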
format | Article |
id | doaj.art-ffc2159e366d4da0afac68ef4cf9dd70 |
institution | Directory Open Access Journal |
issn | 2076-3417 |
language | English |
publishDate | 2022-03-01 |
publisher | MDPI AG |
record_format | Article |
series | Applied Sciences |
spelling | doaj.art-ffc2159e366d4da0afac68ef4cf9dd70 (2023-11-30T20:49:02Z); eng; MDPI AG; Applied Sciences; 2076-3417; 2022-03-01; vol. 12, no. 6, art. 2835; doi:10.3390/app12062835 |
affiliations | Nadeen Shoukry: Media Engineering and Technology Department, The German University in Cairo, Cairo 11835, Egypt; Mohamed A. Abd El Ghany: Information Engineering and Technology Department, The German University in Cairo, Cairo 11835, Egypt; Mohammed A.-M. Salem: Digital Media Engineering and Technology Department, The German University in Cairo, Cairo 11835, Egypt |
title | Multi-Modal Long-Term Person Re-Identification Using Physical Soft Bio-Metrics and Body Figure |
topic | FaceNet; long-term person re-identification; OSNet; PCB; PRCC dataset; Siamese network |
url | https://www.mdpi.com/2076-3417/12/6/2835 |
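The Rank-1/5/10/20 figures quoted in the abstract are cumulative match characteristic (CMC) scores: the fraction of queries whose correct identity appears among the k nearest gallery entries. A minimal illustrative computation follows; the distance matrix and identity labels here are made up for the example.

```python
import numpy as np

def rank_k_accuracy(dist, q_ids, g_ids, k):
    """Fraction of queries whose true identity occurs in the k closest gallery items.

    dist: (num_queries, num_gallery) distance matrix, smaller = more similar.
    """
    order = np.argsort(dist, axis=1)             # gallery indices, nearest first
    topk_ids = np.asarray(g_ids)[order[:, :k]]   # identities of the k nearest
    hits = (topk_ids == np.asarray(q_ids)[:, None]).any(axis=1)
    return hits.mean()

# Toy distance matrix: 3 queries vs. 4 gallery images.
dist = np.array([
    [0.2, 0.9, 0.8, 0.7],   # query 0: nearest gallery item has id 1 (correct)
    [0.6, 0.1, 0.9, 0.8],   # query 1: nearest has id 2, but query 1 is id 3
    [0.9, 0.8, 0.3, 0.2],   # query 2: nearest has id 3 (correct)
])
q_ids = [1, 3, 3]
g_ids = [1, 2, 2, 3]
print(rank_k_accuracy(dist, q_ids, g_ids, k=1))  # 2 of 3 queries hit at rank 1
```

Increasing k can only raise the score, which is why the reported accuracies grow monotonically from Rank-1 (81.4%) to Rank-20 (83.7%).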