Adversarial metric attack and defense for person re-identification

Bibliographic Details
Main Authors: Bai, S, Li, Y, Zhou, Y, Li, Q, Torr, PHS
Format: Journal article
Language: English
Published: IEEE, 2020
Institution: University of Oxford

Description

Person re-identification (re-ID) has attracted much attention recently due to its great importance in video surveillance. In general, distance metrics used to identify two person images are expected to be robust under various appearance changes. However, our work observes the extreme vulnerability of existing distance metrics to adversarial examples, generated by simply adding human-imperceptible perturbations to person images. Hence, the security risk is dramatically increased when deploying commercial re-ID systems in video surveillance. Although adversarial examples have been extensively studied in classification analysis, they are rarely studied in metric analysis such as person re-identification. The most likely reason is the natural gap between the training and testing of re-ID networks: the predictions of a re-ID network cannot be used directly during testing without an effective metric. In this work, we bridge this gap by proposing Adversarial Metric Attack, a methodology parallel to adversarial classification attacks. Comprehensive experiments clearly reveal the adversarial effects in re-ID systems. We also present an early attempt at training a metric-preserving network, thereby defending the metric against adversarial attacks. Finally, by benchmarking various adversarial settings, we expect that our work can facilitate the development of adversarial attack and defense in metric-based applications.
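To make the idea of an adversarial metric attack concrete, the sketch below perturbs a probe vector so that its distance to a matching gallery vector grows in feature space, using a single FGSM-style sign step on the metric. This is a minimal illustration under stated assumptions, not the authors' method: the linear map `W` stands in for a real re-ID embedding network, and the step size `eps` and the `[0, 1]` clipping range are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "embedding network": a fixed linear map standing in for a re-ID CNN.
W = rng.normal(size=(8, 16))

def embed(x):
    return W @ x

def sq_distance(x, g):
    """Squared Euclidean distance between the embeddings of x and g."""
    diff = embed(x) - embed(g)
    return float(diff @ diff)

def metric_attack_step(x, g, eps=0.03):
    """One FGSM-style step that increases the embedding distance
    between probe x and gallery image g (an untargeted metric attack).
    For the linear embedding, d/dx ||Wx - Wg||^2 = 2 W^T (Wx - Wg)."""
    grad = 2.0 * W.T @ (embed(x) - embed(g))
    # Move each pixel by eps in the direction that grows the distance,
    # then clip back into the valid image range.
    return np.clip(x + eps * np.sign(grad), 0.0, 1.0)

probe = rng.uniform(size=16)
gallery = probe + 0.01 * rng.normal(size=16)  # near-duplicate of the probe

d_before = sq_distance(probe, gallery)
adv = metric_attack_step(probe, gallery)
d_after = sq_distance(adv, gallery)
print(d_before, d_after)  # the perturbed probe is pushed away in feature space
```

The perturbation is bounded by `eps` per pixel (human-imperceptible for small `eps`), yet the distance to the matching gallery image increases, which is exactly the failure mode the abstract describes for re-ID metrics.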