Backdoor attacks in neural networks

As artificial intelligence becomes increasingly integrated into our daily lives, deep neural networks are deployed in a multitude of critical domains, including facial recognition, autonomous vehicles, and more. This pervasive integration, while transformative, has brought about a pressing concern: the potential for disastrous consequences arising from malicious backdoor attacks on neural networks. To determine the effects and limitations of these attacks, this project conducts a comprehensive examination of two previously proposed backdoor attack strategies, namely Blended and Blind backdoors, along with two previously proposed backdoor defence mechanisms, namely Neural Cleanse and Spectral Signatures. A review of the pertinent research literature was performed, and experiments were carried out to test the effectiveness of these strategies.
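The Blended backdoor attack examined in this project poisons training images by alpha-blending a fixed trigger pattern into them and relabelling the result with the attacker's target class, so the trigger stays visually subtle. A minimal sketch of that blending step is shown below, assuming NumPy float arrays; the function name `blend_poison` and the all-white trigger are illustrative, not from the thesis itself.

```python
import numpy as np

def blend_poison(image, trigger, alpha=0.2):
    """Alpha-blend a trigger pattern into an image: (1-alpha)*image + alpha*trigger."""
    return ((1.0 - alpha) * image + alpha * trigger).astype(image.dtype)

# Toy example: a random 4x4 grayscale "image" and an all-white trigger pattern.
rng = np.random.default_rng(0)
image = rng.random((4, 4)).astype(np.float32)
trigger = np.ones((4, 4), dtype=np.float32)

# The poisoned sample stays close to the original but carries the trigger;
# in a real attack its label would also be flipped to the attacker's target class.
poisoned = blend_poison(image, trigger, alpha=0.2)
assert np.allclose(poisoned, 0.8 * image + 0.2 * trigger)
```

A small blending coefficient keeps the poisoned images hard to spot by eye while still giving the network a consistent trigger signal to latch onto during training.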


Bibliographic Details
Main Author: Liew, Sher Yun
Other Authors: Zhang, Tianwei
Format: Final Year Project (FYP)
Language: English
Published: Nanyang Technological University, 2024
Subjects: Computer and Information Science; Cyber security
Online Access:https://hdl.handle.net/10356/175146
School: School of Computer Science and Engineering
Degree: Bachelor's degree
Citation: Liew, S. Y. (2024). Backdoor attacks in neural networks. Final Year Project (FYP), Nanyang Technological University, Singapore. https://hdl.handle.net/10356/175146