Explicit Content Detection System: An Approach towards a Safe and Ethical Environment
An explicit content detection (ECD) system to detect Not Suitable For Work (NSFW) media (i.e., image/video) content is proposed. The proposed ECD system is based on a residual network (i.e., a deep learning model) that returns a probability indicating the explicitness of the media content. This value is then compared with a defined threshold to decide whether the content is explicit or nonexplicit. The proposed system not only differentiates between explicit and nonexplicit content but also indicates the degree of explicitness, i.e., high, medium, or low. In addition, the system identifies media files with tampered extensions and labels them as suspicious. Experimental results show that the proposed model achieves an accuracy of ~95% when tested on our image and video datasets.
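The record does not include the authors' decision rules, but the two checks described in the abstract (thresholding the network's probability into explicit/nonexplicit and a degree of high, medium, or low, and flagging files whose extensions have been tampered with) can be sketched as follows. This is a minimal illustration only: the threshold of 0.5, the degree cut-offs of 0.8 and 0.65, the function names, and the magic-byte signatures are assumptions for the sketch, not values or code from the paper.

```python
from pathlib import Path

# Hypothetical cut-offs; the paper does not publish its actual values.
EXPLICIT_THRESHOLD = 0.5   # explicit vs. nonexplicit decision
HIGH_CUTOFF = 0.8          # degree buckets for explicit content
MEDIUM_CUTOFF = 0.65


def classify_explicitness(prob):
    """Map the network's explicitness probability to (label, degree)."""
    if prob < EXPLICIT_THRESHOLD:
        return "nonexplicit", None
    if prob >= HIGH_CUTOFF:
        return "explicit", "high"
    if prob >= MEDIUM_CUTOFF:
        return "explicit", "medium"
    return "explicit", "low"


# A few common file signatures (magic bytes) used to spot tampered extensions.
MAGIC_BYTES = {
    ".jpg": b"\xff\xd8\xff",
    ".jpeg": b"\xff\xd8\xff",
    ".png": b"\x89PNG\r\n\x1a\n",
    ".gif": b"GIF8",
}


def has_tampered_extension(path):
    """Flag a file as suspicious if its header does not match its extension."""
    ext = Path(path).suffix.lower()
    signature = MAGIC_BYTES.get(ext)
    if signature is None:
        return False  # unknown extension: cannot verify with this table
    with open(path, "rb") as fh:
        header = fh.read(len(signature))
    return not header.startswith(signature)
```

In such a pipeline, `prob` would come from the residual network's output for an image or a sampled video frame, and any file flagged by the extension check would be labeled suspicious regardless of its classification result.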
Main Authors: | Ali Qamar Bhatti (Iqra University, Pakistan), Muhammad Umer (Iqra University, Pakistan), Syed Hasan Adil (Iqra University, Pakistan), Mansoor Ebrahim (Sunway University, Malaysia), Daniyal Nawaz (Iqra University, Pakistan), Faizan Ahmed (Iqra University, Pakistan) |
---|---|
Format: | Article |
Language: | English |
Published: | Hindawi Limited, 2018-01-01 |
Series: | Applied Computational Intelligence and Soft Computing |
ISSN: | 1687-9724, 1687-9732 |
Online Access: | http://dx.doi.org/10.1155/2018/1463546 |