Backdoor attacks in neural networks

Overview

Bibliographic Details
Main Author: Low, Wen Wen
Other Authors: Zhang Tianwei
Format: Final Year Project (FYP)
Language: English
Published: Nanyang Technological University 2023
Subjects:
Online Access: https://hdl.handle.net/10356/171934
Physical Description
Summary: Neural networks have emerged as a powerful tool in the field of artificial intelligence and machine learning. Inspired by the structure and functionality of the human brain, neural networks are computational models composed of interconnected nodes, or "neurons," that work collaboratively to process and analyse data. By learning from vast amounts of labelled examples, neural networks can recognize patterns, make predictions, and solve complex tasks with remarkable accuracy. With the increasing adoption of neural networks in various domains, ensuring their robustness and security has become a critical concern. This project explores backdoor attacks in neural networks: the deliberate insertion of hidden triggers into a model's learning process, compromising its integrity and reliability. The project aims to understand the mechanisms and vulnerabilities that enable backdoor attacks and investigates defence strategies to mitigate their impact. Through experiments and analysis, this FYP seeks to contribute to the development of robust defence mechanisms that enhance the security of neural network models against backdoor attacks, ensuring their trustworthiness and reliability in critical applications.
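The trigger-insertion mechanism the summary describes is commonly realised as training-data poisoning: a small trigger pattern is stamped onto a fraction of the training inputs, and those samples are relabelled to an attacker-chosen target class, so the trained model learns to associate the trigger with that class. The sketch below is a hypothetical illustration of this idea, not the project's actual attack code; the function names, the square-patch trigger, and the poison rate are all assumptions:

```python
import numpy as np

def add_trigger(image, patch_value=1.0, patch_size=3):
    """Stamp a small square trigger patch into the bottom-right corner."""
    poisoned = image.copy()
    poisoned[-patch_size:, -patch_size:] = patch_value
    return poisoned

def poison_dataset(images, labels, target_label, poison_rate=0.1, seed=0):
    """Poison a fraction of the training set: add the trigger to the
    selected samples and relabel them to the attacker's target class."""
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    n_poison = int(len(images) * poison_rate)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    for i in idx:
        images[i] = add_trigger(images[i])
        labels[i] = target_label
    return images, labels, idx
```

A model trained on the poisoned set behaves normally on clean inputs but predicts the target class whenever the trigger patch is present, which is what makes such backdoors hard to detect by accuracy metrics alone.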