Abstraction and Prediction Algorithms: A Harm-reduction Framework

Bibliographic Details
Main Author: Desilvestro, Adrian (Author)
Other Authors: Ryan, Matthew (Contributor), Skov, Peer (Contributor)
Format: Other
Published: Auckland University of Technology, 2020.
Description
Summary: ProPublica's allegation that an algorithmic tool used to predict re-offenders is "biased against blacks" met a wave of criticism from the wider community. Researchers have since shown a trade-off between accuracy and fairness, concluding that the risk tool, COMPAS, was not inherently discriminatory. However, in light of ProPublica's objections, a growing body of literature on assessing fairness in machine learning systems has emerged. Performance criteria combine quantitative and qualitative elements, so users' preferences are hard to specify objectively. This study explores a Pareto frontier framework to illustrate the relative model (in)efficiencies that arise in Risk Prediction Instruments (RPIs). The research follows a logistic framework for estimating recidivism risk; the design parameters include the choice of fairness constraints and the choice of a bin scoring system (the "bin number"). This dissertation presents three experiments showing how decision-makers can improve the performance of their RPIs: (1) improving efficiency through a relaxed version of the constraint, (2) improving efficiency through 'cost-free' constraint implementation, and (3) improving efficiency through a revised scoring system.
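As a rough illustration of the framework the abstract describes, the sketch below (not taken from the dissertation) fits a logistic model on synthetic data, bins the predicted risk into a coarse score, and sweeps the bin number while tracking accuracy against a simple demographic-parity gap, keeping the non-dominated points as a crude Pareto frontier. The synthetic data, the parity metric, and the midpoint cutoff rule are all illustrative assumptions, not COMPAS data or the study's actual fairness constraints.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic stand-in data (purely illustrative, not COMPAS): two features,
# a binary group label, and an outcome correlated with the features.
n = 5000
group = rng.integers(0, 2, n)                      # hypothetical protected attribute
x = rng.normal(size=(n, 2)) + 0.3 * group[:, None]
logits = 1.2 * x[:, 0] - 0.8 * x[:, 1] - 0.2
y = (rng.random(n) < 1 / (1 + np.exp(-logits))).astype(int)

# Logistic framework for estimating risk, as in the abstract.
model = LogisticRegression().fit(x, y)
p = model.predict_proba(x)[:, 1]

def evaluate(p, y, group, n_bins):
    """Bin risk probabilities into an n_bins-point score, then report
    accuracy and a simple demographic-parity gap for that score."""
    bins = np.minimum((p * n_bins).astype(int), n_bins - 1)  # score in 0..n_bins-1
    pred = bins >= n_bins // 2                               # high-risk cutoff at the midpoint
    accuracy = (pred == y).mean()
    parity_gap = abs(pred[group == 0].mean() - pred[group == 1].mean())
    return accuracy, parity_gap

# Sweep the bin number (one of the design parameters mentioned above) and
# keep the non-dominated (higher accuracy, lower gap) pairs: a crude Pareto frontier.
candidates = [(n_bins, *evaluate(p, y, group, n_bins)) for n_bins in (2, 4, 6, 8, 10, 20)]
frontier = [c for c in candidates
            if not any(o[1] >= c[1] and o[2] <= c[2] and (o[1] > c[1] or o[2] < c[2])
                       for o in candidates)]
for n_bins, acc, gap in frontier:
    print(f"bins={n_bins:2d}  accuracy={acc:.3f}  parity_gap={gap:.3f}")
```

Points surviving the filter are those where no other bin number does at least as well on both accuracy and the parity gap; relaxing or tightening the fairness criterion would shift which scoring systems land on this frontier, which is the kind of design trade-off the three experiments explore.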