Protecting neural networks from adversarial attacks

Deep learning has become very popular in recent years, and with that popularity come rising concerns about protecting the Intellectual Property (IP) rights of these models. Building and training deep learning models such as Convolutional Neural Networks (CNNs) requires in-depth technical expertise, computational resources, large amounts of data, and time; hence the motivation to prevent the theft of such valuable models. Two robust frameworks exist for this purpose: watermarking and locking. Watermarking allows the original ownership of a model to be validated, whereas locking encrypts the model so that only authorized access produces accurate results. This report presents a workflow that applies both watermarking and locking to various image classification models and shows how the two techniques can work hand in hand without compromising the models' performance.
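The report's specific constructions are not reproduced in this record, but a minimal sketch can illustrate the two ideas named in the abstract. The sketch below assumes a trigger-set (backdoor-style) watermark, verified by checking a model's accuracy on a secret set of keyed inputs, and a locking scheme that scrambles a layer's weights with a key-derived permutation; the function names, the permutation-based lock, and the 0.9 verification threshold are illustrative assumptions, not the project's actual method.

    # Illustrative sketch only -- not the report's implementation.
    import numpy as np

    def lock_weights(weights: np.ndarray, key: int) -> np.ndarray:
        """Scramble the rows of a weight matrix with a permutation derived from a secret key."""
        rng = np.random.default_rng(key)
        return weights[rng.permutation(weights.shape[0])]

    def unlock_weights(locked: np.ndarray, key: int) -> np.ndarray:
        """Invert the keyed permutation; without the right key the weights stay scrambled."""
        rng = np.random.default_rng(key)
        perm = rng.permutation(locked.shape[0])
        restored = np.empty_like(locked)
        restored[perm] = locked
        return restored

    def verify_watermark(predict, trigger_inputs, trigger_labels, threshold=0.9) -> bool:
        """Claim ownership if the model reproduces the secret trigger labels often enough."""
        predictions = predict(trigger_inputs)
        return float(np.mean(predictions == trigger_labels)) >= threshold

    if __name__ == "__main__":
        w = np.arange(12, dtype=float).reshape(4, 3)                 # stand-in for one layer's weights
        locked = lock_weights(w, key=2024)
        print(np.array_equal(unlock_weights(locked, key=2024), w))   # True: correct key restores the layer
        print(np.array_equal(unlock_weights(locked, key=7), w))      # almost surely False: wrong key does not

In this illustrative framing the two techniques are complementary, as the abstract claims: the watermark lives in the trained parameters for after-the-fact ownership proof, while the keyed lock gates day-to-day use, and neither needs to change the model's architecture or its accuracy for an authorized user.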

Bibliographic Details
Main Author: Lim, Xin Yi
Other Authors: Anupam Chattopadhyay (anupam@ntu.edu.sg)
School: School of Computer Science and Engineering
Format: Final Year Project (FYP)
Degree: Bachelor's degree
Language: English
Published: Nanyang Technological University, 2024
Project Code: SCSE23-0259
Subjects: Computer and Information Science; Neural networks
Citation: Lim, X. Y. (2024). Protecting neural networks from adversarial attacks. Final Year Project (FYP), Nanyang Technological University, Singapore. https://hdl.handle.net/10356/175191
Online Access: https://hdl.handle.net/10356/175191