Adversarial attacks on fingerprint liveness detection
Abstract Deep neural networks are vulnerable to adversarial samples, posing potential threats to applications that deploy deep learning models in practical conditions. A typical example is the fingerprint liveness detection module in fingerprint authentication systems. Spurred by the rapid progress of deep learning, deep-network-based fingerprint liveness detection algorithms have sprung up and now dominate the field. In this paper, we therefore investigate the feasibility of deceiving state-of-the-art deep-network-based fingerprint liveness detection schemes by exploiting this vulnerability. Extensive evaluations are made with three existing adversarial methods: FGSM, MI-FGSM, and DeepFool. We also propose an adversarial attack method that makes adversarial fingerprint images robust to transformations such as rotation and flipping. We demonstrate that these outstanding schemes are likely to classify fake fingerprints as live ones once tiny perturbations are added, even without access to the internal details of the underlying models. The experimental results reveal a significant security loophole in these schemes, and urgent attention should be paid to adversarial defenses, not only in fingerprint liveness detection but in all deep learning applications.
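The attacks named in the abstract are standard gradient-based methods. As background, here is a minimal sketch of FGSM (the Fast Gradient Sign Method, the simplest of the three) in PyTorch; the `model`, the `eps` value, and the [0, 1] pixel range are illustrative assumptions, not details taken from the paper.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, eps=0.03):
    """One-step FGSM: perturb input x in the direction of the sign of the
    loss gradient, bounded by eps. All arguments are placeholders for
    whatever liveness classifier and data a reader might plug in."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)  # loss w.r.t. the true label
    loss.backward()                          # gradient lands in x_adv.grad
    # The "tiny perturbation": eps times the gradient sign, then clamp
    # back to the valid pixel range so the image stays plausible.
    x_adv = x_adv + eps * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
```

MI-FGSM extends this single step into an iterative loop with a momentum term accumulated on the gradient, which improves the transferability of the perturbations to models whose internals are unknown (the black-box setting the abstract alludes to).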
Main Authors: | Jianwei Fei, Zhihua Xia, Peipeng Yu, Fengjun Xiao |
---|---|
Format: | Article |
Language: | English |
Published: | SpringerOpen, 2020-01-01 |
Series: | EURASIP Journal on Image and Video Processing |
Subjects: | Deep learning; Fingerprint liveness detection; Adversarial attacks |
Online Access: | https://doi.org/10.1186/s13640-020-0490-z |
ISSN: | 1687-5281 |
Collection: | DOAJ (Directory of Open Access Journals), record doaj.art-23677acf3ee34f488b68f39265d59d8f |
Citation: | EURASIP Journal on Image and Video Processing, 2020(1), pp. 1-11 |
Author Affiliations: | Jianwei Fei, Zhihua Xia, Peipeng Yu: Jiangsu Engineering Center of Network Monitoring, Jiangsu Collaborative Innovation Center on Atmospheric Environment and Equipment Technology, School of Computer and Software, Nanjing University of Information Science and Technology; Fengjun Xiao: Hangzhou Dianzi University |