Deep forgery discriminator via image degradation analysis

Abstract: Generative adversarial network (GAN)-based deep generative models are widely used to create hyper-realistic face-swapping images and videos. Their malicious use, however, poses a serious threat to online content, making it difficult to verify the authenticity of images and videos. Most existing detection methods are suited to only one type of forgery and work only on low-quality tampered images, which restricts their application. This paper addresses the construction of a novel discriminator with broader detection capability. Analysing the visual characteristics of manipulated images from the perspective of image quality reveals that synthesized faces exhibit varying degrees of quality degradation compared with the source content. Several kinds of image quality-related handcrafted features are therefore extracted, including texture, sharpness, and frequency-domain features, together with deep features, to expose the inconsistencies and modification traces in fake faces. Multi-feature fusion yields a 1065-dimensional vector for each image, which is then fed into a random forest (RF) to train a targeted binary classification detector. Extensive experiments show that the proposed scheme outperforms previous methods in recognition accuracy on multiple manipulation databases, including the Celeb-DF database, which has better visual quality.
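
The abstract describes a detection pipeline: image quality-related features (texture, sharpness, frequency-domain, and deep features) are fused into one vector per face image and used to train a random forest binary classifier. The sketch below is a minimal illustration of that kind of pipeline, not the authors' implementation: the specific descriptors (a uniform LBP texture histogram, Laplacian-variance sharpness, a radially averaged log-spectrum) and all parameters are assumptions, deep features are omitted, and the fused vector is far smaller than the paper's 1065 dimensions.

# Illustrative sketch only (assumed features, not the paper's feature set).
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.ensemble import RandomForestClassifier

def quality_features(gray):
    """Fuse texture, sharpness, and frequency-domain descriptors for one
    2-D grayscale face crop (NumPy array)."""
    # Texture: histogram of uniform local binary patterns (10 bins for P=8).
    lbp = local_binary_pattern(gray, P=8, R=1, method="uniform")
    tex, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)

    # Sharpness: variance of a simple Laplacian response.
    lap = (np.roll(gray, 1, 0) + np.roll(gray, -1, 0) +
           np.roll(gray, 1, 1) + np.roll(gray, -1, 1) - 4.0 * gray)
    sharp = np.array([lap.var()])

    # Frequency domain: radially averaged log-magnitude spectrum, 16 coarse bins.
    spec = np.log1p(np.abs(np.fft.fftshift(np.fft.fft2(gray))))
    h, w = spec.shape
    yy, xx = np.mgrid[0:h, 0:w]
    r = np.hypot(yy - h / 2, xx - w / 2)
    edges = np.linspace(0, r.max(), 17)
    freq = np.array([spec[(r >= edges[i]) & (r < edges[i + 1])].mean()
                     for i in range(16)])

    # Multi-feature fusion by concatenation (27-D here; 1065-D in the paper).
    return np.concatenate([tex, sharp, freq])

def train_detector(face_crops, labels):
    """Train a real-vs-fake random forest on fused quality features.
    face_crops: list of grayscale face images; labels: 0 = real, 1 = fake."""
    X = np.stack([quality_features(img) for img in face_crops])
    clf = RandomForestClassifier(n_estimators=300, random_state=0)
    return clf.fit(X, labels)

Under these assumptions, train_detector would be called on aligned grayscale face crops with binary labels; the actual descriptors, fusion, and RF configuration are those reported in the paper.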

Bibliographic Details
Main Authors: Miaomiao Yu, Jun Zhang, Shuohao Li, Jun Lei, Fenglei Wang, Hao Zhou
Format: Article
Language: English
Published: Wiley, 2021-09-01
Series: IET Image Processing
Online Access: https://doi.org/10.1049/ipr2.12234
ISSN: 1751-9659, 1751-9667
Volume/Issue/Pages: vol. 15, no. 11, pp. 2478-2493
Author Affiliations: Science and Technology on Information Systems Engineering Laboratory, National University of Defense Technology, Changsha, China (all six authors)