Detecting adversarial samples for deep neural networks through mutation testing
Deep Neural Networks (DNNs) excel at many tasks; one of the best known, image recognition, uses a subset of DNNs called Convolutional Neural Networks (CNNs). However, they are vulnerable to adversarial attacks: malicious modifications made to input sam...
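The title refers to mutation testing as a detection mechanism. One common formulation of this idea (the abstract is truncated, so this is a generic sketch, not necessarily the thesis's exact method) is that adversarial samples sit close to a model's decision boundary, so small random mutations of the model's weights flip their predicted label far more often than for natural inputs. A minimal toy illustration, using an assumed linear stand-in for a DNN:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a trained classifier: a 2-class linear model.
# (Illustration only; a real detector would mutate an actual DNN.)
W = np.eye(2)

def predict(x, W):
    """Predicted class label: index of the largest logit."""
    return int(np.argmax(x @ W))

def label_change_rate(x, W, n_mutants=200, sigma=0.1):
    """Fraction of randomly weight-mutated models that assign x a
    different label than the original model does."""
    base = predict(x, W)
    flips = sum(
        predict(x, W + rng.normal(scale=sigma, size=W.shape)) != base
        for _ in range(n_mutants)
    )
    return flips / n_mutants

# A confidently classified ("natural") input versus one sitting near
# the decision boundary, as adversarial samples typically do.
x_natural = np.array([1.0, -1.0])
x_boundary = np.array([1.0, 0.99])

print(label_change_rate(x_natural, W))   # stays near 0
print(label_change_rate(x_boundary, W))  # substantially higher
```

Thresholding the label-change rate then separates the two populations: inputs whose rate exceeds the threshold are flagged as likely adversarial.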
Main Author: Tan, Kye Yen
Other Authors: Chang Chip Hong
Format: Final Year Project (FYP)
Language: English
Published: Nanyang Technological University, 2020
Online Access: https://hdl.handle.net/10356/138719
Similar Items
- Towards deep neural networks robust to adversarial examples
  by: Matyasko, Alexander
  Published: (2020)
- Adversarial robustness of deep reinforcement learning
  by: Qu, Xinghua
  Published: (2022)
- Using deep neural networks for chess position evaluation
  by: Phang, Benito Yan Feng
  Published: (2023)
- Stock trading prediction using deep learning neural networks
  by: Ong, Hao Cong
  Published: (2021)
- Benchmarking novel graph neural networks
  by: Bhagwat, Abhishek
  Published: (2021)