Tuning Fairness by Balancing Target Labels

Fairness in machine learning models has recently attracted considerable attention, as ensuring it is key to maintaining the general public's confidence in deployed machine learning systems. We focus on mitigating the harm caused by a biased machine learning system that offers better outputs (e.g., loans, job interviews) for certain groups than for others. We show that bias in the outputs can be controlled naturally in probabilistic models by introducing a latent target output. This formulation has several advantages: first, it is a unified framework for several notions of group fairness, such as demographic parity and equality of opportunity; second, it is expressed as a marginalization rather than as a constrained problem; and third, it lets us encode our knowledge of what unbiased outputs should be. Practically, the second advantage allows us to avoid unstable constrained optimization procedures and to reuse off-the-shelf toolboxes, and the third translates into the ability to control the level of fairness by directly varying fairness target rates. In contrast, existing approaches rely on intermediate, arguably unintuitive control parameters such as covariance thresholds.

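As a concrete reading of the two group-fairness notions the abstract names, the sketch below computes their standard empirical gaps for a binary classifier, plus a one-line marginalization in the spirit of the abstract's "expressed as a marginalization" claim. It illustrates the criteria only, not the paper's own model or code; the function names, toy data, and the `adjusted_score` factorization are illustrative assumptions.

```python
import numpy as np

# Standard empirical checks for the fairness notions named in the abstract.
# These definitions are standard in the fairness literature; nothing below
# is taken from the paper's own implementation.

def demographic_parity_gap(y_hat, s):
    """|P(y_hat=1 | s=0) - P(y_hat=1 | s=1)|: gap in acceptance rates."""
    return abs(y_hat[s == 0].mean() - y_hat[s == 1].mean())

def equal_opportunity_gap(y_hat, y, s):
    """|P(y_hat=1 | y=1, s=0) - P(y_hat=1 | y=1, s=1)|: gap in true-positive rates."""
    pos = y == 1
    return abs(y_hat[pos & (s == 0)].mean() - y_hat[pos & (s == 1)].mean())

def adjusted_score(p_latent, p_y1_given_latent):
    """Our reading of the abstract's marginalization: with a latent target
    label ybar, P(y=1 | x) = sum over ybar of P(y=1 | ybar) * P(ybar | x)."""
    return (p_y1_given_latent[1] * p_latent
            + p_y1_given_latent[0] * (1.0 - p_latent))

# Toy data: 1,000 binary labels y, predictions y_hat, and group labels s.
rng = np.random.default_rng(0)
y = rng.integers(0, 2, size=1000)
s = rng.integers(0, 2, size=1000)
y_hat = rng.integers(0, 2, size=1000)

print("DP gap:", demographic_parity_gap(y_hat, s))
print("EO gap:", equal_opportunity_gap(y_hat, y, s))
```

On this reading, setting the per-group rates P(ybar=1 | s) of the latent label to a common target value drives the demographic parity gap toward zero, which matches the abstract's claim that fairness can be tuned by directly varying target rates.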

Bibliographic Details
Main Authors: Thomas Kehrenberg, Zexun Chen, Novi Quadrianto
Affiliations: Predictive Analytics Lab (PAL), Informatics, University of Sussex, Brighton, United Kingdom (Kehrenberg, Chen, Quadrianto); National Research University Higher School of Economics, Moscow, Russia (Quadrianto)
Format: Article
Language: English
Published: Frontiers Media S.A., 2020-05-01
Series: Frontiers in Artificial Intelligence
ISSN: 2624-8212
DOI: 10.3389/frai.2020.00033
Subjects: algorithmic bias; fairness; machine learning; demographic parity; equality of opportunity
Online Access: https://www.frontiersin.org/article/10.3389/frai.2020.00033/full