Secure text based CAPTCHA system adversarial examples


Bibliographic Details
Main Author: Kant Mannan
Other Authors: Jun Zhao
Format: Final Year Project (FYP)
Language:English
Published: Nanyang Technological University 2021
Subjects:
Online Access:https://hdl.handle.net/10356/148112
_version_ 1811680254792564736
author Kant Mannan
author2 Jun Zhao
author_facet Jun Zhao
Kant Mannan
author_sort Kant Mannan
collection NTU
description Recent developments in the field of Deep Learning (DL) have made it much easier to solve complex artificial intelligence problems. While many fields have benefited from this progress, it is not good news for CAPTCHAs (Completely Automated Public Turing tests to tell Computers and Humans Apart), whose sole purpose is threatened by DL-based attacks: given sufficient training, such attacks can break through a CAPTCHA with ease [1]. At the same time, despite the high capacity of Deep Neural Networks (DNNs), it has been observed that they can be misled into misclassification by small adversarial perturbations [2][3]. We propose a user-friendly CAPTCHA generation method called Secure Adversarial CAPTCHAs (SAC) that makes CAPTCHAs stronger and more robust against the aforementioned attacks while remaining easily readable by humans. In this project report, we explain how we exploit the vulnerability of DNN-based attacks to adversarial perturbations in order to build SAC. We start by synthesizing text in a random font over an adversarial background, producing an intermediate adversarial CAPTCHA. This intermediate result is then refined with a highly transferable adversarial attack, which makes the CAPTCHA more secure and robust. Lastly, we have tested SAC rigorously in experiments covering two popular DNN models, GoogLeNet and ResNet50. Our experiments show considerable promise regarding the usability of SAC and its robustness against a variety of attacks and scenarios.
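The abstract does not name the specific perturbation method behind the "highly transferable adversarial attack", so the sketch below is an illustrative assumption only: it renders a plain text CAPTCHA and applies a single untargeted FGSM-style step against a pretrained ResNet50 surrogate (one of the two models the experiments cover). The helper names render_captcha and fgsm_perturb and the epsilon budget are hypothetical, not taken from the report.

```python
# Illustrative sketch only: a generic FGSM-style perturbation applied to a
# rendered text CAPTCHA, using a pretrained ResNet50 as a surrogate solver.
# The actual SAC pipeline (random-font synthesis over an adversarial
# background, followed by a highly transferable attack) is not reproduced
# here; function names and parameters are assumptions for illustration.
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image, ImageDraw

def render_captcha(text, size=(224, 224)):
    """Render plain text onto a blank canvas (stand-in for SAC's font synthesis)."""
    img = Image.new("RGB", size, "white")
    draw = ImageDraw.Draw(img)
    draw.text((20, 90), text, fill="black")  # default PIL font for simplicity
    return img

def fgsm_perturb(img, surrogate, epsilon=8 / 255):
    """Add a small perturbation that pushes the surrogate away from its own
    current prediction (one untargeted FGSM step)."""
    x = T.ToTensor()(img).unsqueeze(0).requires_grad_(True)
    logits = surrogate(x)
    label = logits.argmax(dim=1)                       # surrogate's own guess
    loss = torch.nn.functional.cross_entropy(logits, label)
    loss.backward()
    x_adv = (x + epsilon * x.grad.sign()).clamp(0, 1)  # keep pixels valid
    return T.ToPILImage()(x_adv.squeeze(0).detach())

if __name__ == "__main__":
    surrogate = models.resnet50(weights=models.ResNet50_Weights.DEFAULT).eval()
    captcha = render_captcha("7K3XQ")
    adv_captcha = fgsm_perturb(captcha, surrogate)
    adv_captcha.save("adversarial_captcha.png")
```

In the report's pipeline, the plain rendering step would be replaced by SAC's random-font synthesis over an adversarial background, and the single FGSM step by the transferable attack it describes.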
first_indexed 2024-10-01T03:22:08Z
format Final Year Project (FYP)
id ntu-10356/148112
institution Nanyang Technological University
language English
last_indexed 2024-10-01T03:22:08Z
publishDate 2021
publisher Nanyang Technological University
record_format dspace
spelling ntu-10356/148112 2021-04-23T14:33:57Z Secure text based CAPTCHA system adversarial examples Kant Mannan Jun Zhao School of Computer Science and Engineering junzhao@ntu.edu.sg Engineering::Computer science and engineering Recent developments in the field of Deep Learning (DL) have made it much easier to solve complex artificial intelligence problems. While many fields have benefited from this progress, it is not good news for CAPTCHAs (Completely Automated Public Turing tests to tell Computers and Humans Apart), whose sole purpose is threatened by DL-based attacks: given sufficient training, such attacks can break through a CAPTCHA with ease [1]. At the same time, despite the high capacity of Deep Neural Networks (DNNs), it has been observed that they can be misled into misclassification by small adversarial perturbations [2][3]. We propose a user-friendly CAPTCHA generation method called Secure Adversarial CAPTCHAs (SAC) that makes CAPTCHAs stronger and more robust against the aforementioned attacks while remaining easily readable by humans. In this project report, we explain how we exploit the vulnerability of DNN-based attacks to adversarial perturbations in order to build SAC. We start by synthesizing text in a random font over an adversarial background, producing an intermediate adversarial CAPTCHA. This intermediate result is then refined with a highly transferable adversarial attack, which makes the CAPTCHA more secure and robust. Lastly, we have tested SAC rigorously in experiments covering two popular DNN models, GoogLeNet and ResNet50. Our experiments show considerable promise regarding the usability of SAC and its robustness against a variety of attacks and scenarios. Bachelor of Engineering (Computer Science) 2021-04-23T14:33:57Z 2021-04-23T14:33:57Z 2021 Final Year Project (FYP) Kant Mannan (2021). Secure text based CAPTCHA system adversarial examples. Final Year Project (FYP), Nanyang Technological University, Singapore. https://hdl.handle.net/10356/148112 https://hdl.handle.net/10356/148112 en SCSE20-0290 application/pdf Nanyang Technological University
spellingShingle Engineering::Computer science and engineering
Kant Mannan
Secure text based CAPTCHA system adversarial examples
title Secure text based CAPTCHA system adversarial examples
title_full Secure text based CAPTCHA system adversarial examples
title_fullStr Secure text based CAPTCHA system adversarial examples
title_full_unstemmed Secure text based CAPTCHA system adversarial examples
title_short Secure text based CAPTCHA system adversarial examples
title_sort secure text based captcha system adversarial examples
topic Engineering::Computer science and engineering
url https://hdl.handle.net/10356/148112
work_keys_str_mv AT kantmannan securetextbasedcaptchasystemadversarialexamples