Makeup Presentation Attacks: Review and Detection Performance Benchmark
The application of facial cosmetics can cause substantial alterations in facial appearance, which may degrade the performance of facial biometric systems. Additionally, it was recently demonstrated that makeup can be abused to launch so-called makeup presentation attacks: an attacker may apply heavy makeup to obtain the facial appearance of a target subject with the aim of impersonation, or to conceal their own identity. We provide a comprehensive survey of works on makeup presentation attack detection, along with a critical discussion. Subsequently, we assess the vulnerability of a commercial off-the-shelf and an open-source face recognition system to makeup presentation attacks. Specifically, we focus on impersonation attacks, employing the publicly available Makeup Induced Face Spoofing (MIFS) and Disguised Faces in the Wild (DFW) databases. It is shown that makeup presentation attacks can seriously impact the security of face recognition systems. Further, we propose different image pair-based, i.e. differential, attack detection schemes which analyse differences in the feature representations obtained from a potential makeup presentation attack and the corresponding target face image. The proposed detection systems employ various types of feature extractors, including texture descriptors, facial landmarks, and deep (face) representations. To distinguish makeup presentation attacks from genuine, i.e. bona fide, presentations, machine learning-based classifiers are used. These classifiers are trained on a large number of synthetically generated makeup presentation attacks, produced with a generative adversarial network for facial makeup transfer in conjunction with image warping. Experimental evaluations on the MIFS database and a subset of the DFW database reveal that deep face representations achieve competitive detection equal error rates of 0.7% and 1.8%, respectively.
Main Authors: Christian Rathgeb, Pawel Drozdowski, Christoph Busch
Format: Article
Language: English
Published: IEEE, 2020-01-01
Series: IEEE Access
Subjects: Biometrics; face recognition; presentation attack detection; makeup; makeup attack detection
Online Access: https://ieeexplore.ieee.org/document/9293285/
DOAJ ID: doaj-a779a66d87954f888d2922245dcd9ced
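The image pair-based, differential detection approach described in the abstract can be illustrated with a minimal sketch: features are extracted from both the suspected probe image and the corresponding target reference image, and a classifier decides based on their difference. The histogram features, synthetic toy pairs, and linear SVM below are illustrative stand-ins, not the descriptors, training data, or models evaluated in the paper.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def extract_features(image):
    # Stand-in feature extractor: a coarse grayscale histogram.
    # The paper instead uses texture descriptors, facial landmarks,
    # or deep face embeddings.
    hist, _ = np.histogram(image, bins=64, range=(0, 255), density=True)
    return hist

def differential_vector(probe, reference):
    # Differential PAD: classify the *difference* between the feature
    # representations of the suspected probe and the target reference.
    return extract_features(probe) - extract_features(reference)

def make_pair(attack):
    # Toy data: bona fide pairs differ only slightly, attack pairs
    # differ substantially (mimicking heavy makeup alterations).
    ref = rng.integers(0, 256, size=(64, 64))
    noise = rng.integers(0, 120 if attack else 15, size=(64, 64))
    probe = np.clip(ref + noise, 0, 255)
    return differential_vector(probe, ref)

X = np.array([make_pair(attack=(i % 2 == 1)) for i in range(200)])
y = np.array([i % 2 for i in range(200)])  # 0 = bona fide, 1 = attack

clf = SVC(kernel="linear").fit(X[:150], y[:150])
print("held-out accuracy:", clf.score(X[150:], y[150:]))
```

The key design point carried over from the paper is that the classifier never sees a single image in isolation; it only sees how the probe deviates from the claimed target, which is what makes the scheme "differential".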
Full citation: C. Rathgeb, P. Drozdowski and C. Busch, "Makeup Presentation Attacks: Review and Detection Performance Benchmark," IEEE Access, vol. 8, pp. 224958-224973, 2020, doi: 10.1109/ACCESS.2020.3044723.
ISSN: 2169-3536
ORCID iDs: Christian Rathgeb (https://orcid.org/0000-0003-1901-9468); Pawel Drozdowski (https://orcid.org/0000-0003-4758-339X); Christoph Busch (https://orcid.org/0000-0002-9159-2923)
Affiliation (all authors): da/sec – Biometrics and Internet-Security Research Group, Hochschule Darmstadt, Darmstadt, Germany
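The detection equal error rates of 0.7% and 1.8% reported for this work refer to the operating point where the two error rates of a presentation attack detector coincide: the proportion of attacks wrongly accepted (APCER) equals the proportion of bona fide presentations wrongly rejected (BPCER). A minimal sketch of that computation, using made-up score distributions rather than the paper's actual detection scores, and assuming higher scores mean "more attack-like":

```python
import numpy as np

def equal_error_rate(bona_fide_scores, attack_scores):
    """Return the EER for detection scores where higher = more attack-like.

    Sweeps candidate thresholds and reports the point where the rate of
    attacks accepted as bona fide (APCER) and the rate of bona fide
    presentations rejected as attacks (BPCER) are closest.
    """
    thresholds = np.sort(np.concatenate([bona_fide_scores, attack_scores]))
    best_gap, eer = np.inf, 1.0
    for t in thresholds:
        apcer = np.mean(attack_scores < t)      # attacks wrongly accepted
        bpcer = np.mean(bona_fide_scores >= t)  # bona fide wrongly rejected
        gap = abs(apcer - bpcer)
        if gap < best_gap:
            best_gap, eer = gap, (apcer + bpcer) / 2
    return eer

# Toy, well-separated score distributions (hypothetical, for illustration).
rng = np.random.default_rng(1)
bona_fide = rng.normal(0.2, 0.1, 1000)
attacks = rng.normal(0.8, 0.1, 1000)
print(f"EER: {equal_error_rate(bona_fide, attacks):.4f}")
```

Lower EER means better separation between attack and bona fide score distributions, which is why it is the headline metric for comparing the detection schemes benchmarked in this article.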