Adversarial attacks on fingerprint liveness detection
Abstract: Deep neural networks are vulnerable to adversarial samples, which poses a potential threat to applications that deploy deep learning models in practical conditions. A typical example is the fingerprint liveness detection module in fingerprint authentication systems. Inspired by great progre...
Main Authors: Jianwei Fei, Zhihua Xia, Peipeng Yu, Fengjun Xiao
Format: Article
Language: English
Published: SpringerOpen, 2020-01-01
Series: EURASIP Journal on Image and Video Processing
Online Access: https://doi.org/10.1186/s13640-020-0490-z
Similar Items
- Transformers and Generative Adversarial Networks for Liveness Detection in Multitarget Fingerprint Sensors, by Soha B. Sandouka, et al. (2021-01-01)
- Unified Generative Adversarial Networks for Multidomain Fingerprint Presentation Attack Detection, by Soha B. Sandouka, et al. (2021-08-01)
- A Study of Adversarial Attacks and Detection on Deep Learning-Based Plant Disease Identification, by Zhirui Luo, et al. (2021-02-01)
- Threat of Adversarial Attacks on Deep Learning in Computer Vision: A Survey, by Naveed Akhtar, et al. (2018-01-01)
- End-to-End Deep Learning Fusion of Fingerprint and Electrocardiogram Signals for Presentation Attack Detection, by Rami M. Jomaa, et al. (2020-04-01)