Subgroup Invariant Perturbation for Unbiased Pre-Trained Model Prediction

Modern deep learning systems have achieved unparalleled success, and several applications have benefited significantly from these technological advancements. However, these systems have also shown vulnerabilities with strong implications for their fairness and trustability. Among these...

Bibliographic Details
Main Authors: Puspita Majumdar, Saheb Chhabra, Richa Singh, Mayank Vatsa
Format: Article
Language: English
Published: Frontiers Media S.A., 2021-02-01
Series: Frontiers in Big Data
Subjects: Fairness, trustability, bias estimation, bias mitigation, subgroup invariant perturbation, gender classification
Online Access: https://www.frontiersin.org/articles/10.3389/fdata.2020.590296/full
id doaj-abde7500d7164ad489df768f59241ea2
record_format Article
spelling 2021-02-18T08:41:33Z
doi 10.3389/fdata.2020.590296
volume 3
article_number 590296
author_affiliation Puspita Majumdar: Department of Computer Science and Engineering, Indraprastha Institute of Information Technology, New Delhi, India
author_affiliation Saheb Chhabra: Department of Computer Science and Engineering, Indraprastha Institute of Information Technology, New Delhi, India
author_affiliation Richa Singh: Department of Computer Science and Engineering, Indian Institute of Technology Jodhpur, Rajasthan, India
author_affiliation Mayank Vatsa: Department of Computer Science and Engineering, Indian Institute of Technology Jodhpur, Rajasthan, India
collection DOAJ
language English
format Article
sources DOAJ
author Puspita Majumdar
Saheb Chhabra
Richa Singh
Mayank Vatsa
author_sort Puspita Majumdar
title Subgroup Invariant Perturbation for Unbiased Pre-Trained Model Prediction
publisher Frontiers Media S.A.
series Frontiers in Big Data
issn 2624-909X
publishDate 2021-02-01
description Modern deep learning systems have achieved unparalleled success, and several applications have benefited significantly from these technological advancements. However, these systems have also shown vulnerabilities with strong implications for their fairness and trustability. Among these vulnerabilities, bias has been an Achilles’ heel problem. Many applications, such as face recognition and language translation, have shown high levels of bias toward particular demographic subgroups. Unbalanced representation of these subgroups in the training data is one of the primary reasons for biased behavior. To address this important challenge, we propose a two-fold contribution: first, a bias estimation metric termed Precise Subgroup Equivalence (PSE), which jointly measures the bias in model prediction and the overall model performance; second, a novel bias mitigation algorithm that is inspired by adversarial perturbation and uses the PSE metric. The mitigation algorithm learns a single uniform perturbation, termed the Subgroup Invariant Perturbation, which is added to the input dataset to generate a transformed dataset. The transformed dataset, when given as input to the pre-trained model, reduces the bias in model prediction. Multiple experiments performed on four publicly available face datasets showcase the effectiveness of the proposed algorithm for race and gender prediction.
topic Fairness
trustability
bias estimation
bias mitigation
subgroup invariant perturbation
gender classification
url https://www.frontiersin.org/articles/10.3389/fdata.2020.590296/full
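The abstract above describes the mitigation procedure only at a high level: a single uniform perturbation is learned and added to every input so that a frozen, pre-trained model's predictions become less biased across subgroups. The following PyTorch sketch is a minimal, hedged illustration of that idea; the loader interface, the loss weighting `lam`, and the per-subgroup loss-variance term used as a stand-in for the paper's PSE metric are all assumptions for illustration, not the authors' published formulation.

    # Minimal sketch: learn one perturbation tensor shared by every input image,
    # in the spirit of the paper's Subgroup Invariant Perturbation.
    # ASSUMPTIONS: loader yields (images, labels, subgroup_ids); the
    # loss-variance bias term below is a surrogate, not the actual PSE metric.
    import torch
    import torch.nn.functional as F

    def learn_uniform_perturbation(model, loader, image_shape=(3, 224, 224),
                                   steps=100, lr=0.01, lam=1.0):
        model.eval()
        for p in model.parameters():
            p.requires_grad_(False)        # the pre-trained model stays frozen

        # One perturbation, broadcast across the whole batch/dataset.
        delta = torch.zeros(1, *image_shape, requires_grad=True)
        opt = torch.optim.Adam([delta], lr=lr)

        for _ in range(steps):
            for x, y, g in loader:
                # Same delta added to every image; clamp to a valid pixel range.
                logits = model((x + delta).clamp(0.0, 1.0))
                task_loss = F.cross_entropy(logits, y)     # keep accuracy

                # Bias surrogate: variance of the loss across subgroups
                # (low variance ~ similar treatment of every subgroup).
                group_losses = [F.cross_entropy(logits[g == gid], y[g == gid])
                                for gid in g.unique()]
                bias_loss = torch.stack(group_losses).var(unbiased=False)

                opt.zero_grad()
                (task_loss + lam * bias_loss).backward()   # joint objective
                opt.step()

        return delta.detach()

At inference, the learned `delta` would simply be added to each test image before it is fed to the unchanged pre-trained model, matching the abstract's description of a transformed dataset that reduces bias in model prediction.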