Protecting neural networks from adversarial attacks

Bibliographic Details
Main Author: Kwek, Jia Ying
Other Authors: Anupam Chattopadhyay
Format: Final Year Project (FYP)
Language: English
Published: Nanyang Technological University 2020
Subjects:
Online Access:https://hdl.handle.net/10356/137937
Description
Summary: Under the umbrella of technology, there has been rising interest in Artificial Intelligence (AI), Machine Learning, and Neural Networks in recent years [4]. Neural networks have been widely adopted to solve machine learning tasks across various industrial domains. However, their growing popularity and deployment bring security issues with them. Adversarial attacks on neural networks have become a major concern, as such attacks cause the network to mis-classify or mis-predict. This project is therefore a research study of techniques for defending neural networks against adversarial attacks. Cryptographic techniques will also be examined, since they can serve as another form of protection for the trained network.
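
For context, the sketch below illustrates one widely known adversarial attack, the Fast Gradient Sign Method (FGSM), which perturbs an input to induce the kind of mis-classification described above. The model architecture, data shapes, and epsilon value are illustrative assumptions, not details taken from the project itself.

# Minimal FGSM sketch (assumed PyTorch setup; toy model and data for illustration only).
import torch
import torch.nn as nn

def fgsm_attack(model, x, y, epsilon=0.1):
    """Perturb input x so the classifier is pushed toward mis-predicting label y."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.CrossEntropyLoss()(model(x_adv), y)
    loss.backward()
    # Step in the direction that increases the loss, bounded element-wise by epsilon.
    return (x_adv + epsilon * x_adv.grad.sign()).detach()

# Usage with a toy classifier (assumed architecture, random weights and data):
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
x = torch.rand(1, 1, 28, 28)   # e.g. an MNIST-sized image
y = torch.tensor([3])          # assumed true label
x_adv = fgsm_attack(model, x, y)
print(model(x).argmax(1), model(x_adv).argmax(1))  # predictions may differ after the attack

Defence techniques such as those studied in this project aim to keep the two predictions above consistent even under such perturbations.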