Designing adversarial signals against three deep learning and non-deep learning methods

The widespread adoption of machine learning, especially Deep Neural Networks (DNNs), in daily life raises serious concerns about their security properties. Szegedy et al. showed that DNNs are vulnerable to adversarial examples: images with small, deliberately designed perturbations....
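The abstract's notion of a "small, deliberately designed perturbation" can be illustrated with a minimal FGSM-style sketch. This is not the thesis's method, just a common construction from the adversarial-examples literature: each pixel is nudged by at most eps in the direction of the loss gradient's sign, so the adversarial image stays within an eps-ball of the original. The toy image, gradient values, and function name here are illustrative assumptions.

```python
def fgsm_perturb(image, gradient, eps=0.1):
    """Return an adversarial image: each pixel moved by eps in the
    direction of the loss gradient's sign, clipped to [0, 1].
    `image` and `gradient` are flat lists of floats (a toy "image")."""
    sign = lambda g: (g > 0) - (g < 0)  # -1, 0, or +1
    return [min(1.0, max(0.0, x + eps * sign(g)))
            for x, g in zip(image, gradient)]

image = [0.2, 0.5, 0.9]       # toy 3-pixel "image", values in [0, 1]
gradient = [0.7, -0.3, 0.0]   # hypothetical loss gradient w.r.t. pixels
adv = fgsm_perturb(image, gradient)
# Every pixel of `adv` differs from `image` by at most eps = 0.1.
```

Because the perturbation is bounded by eps per pixel, the adversarial image looks nearly identical to a human while potentially flipping the classifier's prediction.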


Bibliographic Details
Main Author: Huang, Yi
Other Authors: Lam Kwok Yan
Format: Thesis-Doctor of Philosophy
Language: English
Published: Nanyang Technological University, 2021
Subjects:
Online Access:https://hdl.handle.net/10356/151718