Fairer machine learning in the real world: Mitigating discrimination without collecting sensitive data

Decisions based on algorithmic, machine learning models can be unfair, reproducing biases in the historical data used to train them. While computational techniques are emerging to address aspects of these concerns through communities such as discrimination-aware data mining (DADM) and fairness, accountability and transparency in machine learning (FATML), their practical implementation faces real-world challenges. …
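As a rough illustration of the kind of discrimination diagnostic the DADM/FATML communities work with, the sketch below (not taken from the paper; the function name, the binary-group assumption, and the example data are purely illustrative) computes a demographic-parity gap. Note that such a check presupposes access to a sensitive attribute such as gender or ethnicity, which is the data-collection tension the article addresses.

```python
# Illustrative sketch only: a simple demographic-parity check. It requires the
# sensitive attribute itself, which organisations may not hold or wish to collect.
from typing import Sequence


def demographic_parity_gap(predictions: Sequence[int],
                           sensitive_attr: Sequence[str]) -> float:
    """Absolute difference in positive-prediction rates between two groups."""
    groups = sorted(set(sensitive_attr))
    assert len(groups) == 2, "this sketch assumes a binary sensitive attribute"
    rates = []
    for g in groups:
        group_preds = [p for p, a in zip(predictions, sensitive_attr) if a == g]
        rates.append(sum(group_preds) / len(group_preds))
    return abs(rates[0] - rates[1])


# Hypothetical example: group "a" is approved 75% of the time, group "b" 25%,
# giving a demographic-parity gap of 0.5.
print(demographic_parity_gap([1, 1, 0, 1, 0, 0, 0, 1],
                             ["a", "a", "a", "a", "b", "b", "b", "b"]))
```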


Bibliographic Details
Main Authors: Michael Veale, Reuben Binns
Format: Article
Language: English
Published: SAGE Publishing 2017-11-01
Series: Big Data & Society
Online Access: https://doi.org/10.1177/2053951717743530