Light Attack: A Physical World Real-Time Attack Against Object Classifiers

It is well known that deep neural networks (DNNs) are vulnerable to adversarial examples. In the digital world, most existing work makes classifiers or detectors fail by adding perturbations that are imperceptible to humans. In the physical world, existing work mostly invalidates classifiers...


Bibliographic Details
Main Authors: Ruizhe Hu, Ting Rui, Yan Ouyang, Jinkang Wang, Qunyan Jiang, Yinan Du
Format: Article
Language: English
Published: IEEE 2025-01-01
Series: IEEE Access
Online Access: https://ieeexplore.ieee.org/document/9791340/