Distrust of Artificial Intelligence: Sources & Responses from Computer Science & Law

Bibliographic Details
Main Authors: Dwork, C. (Author), Minow, M. (Author)
Format: Article
Language: English
Published: MIT Press Journals 2022
Description
Summary: Social distrust of AI stems in part from incomplete and faulty data sources, inappropriate redeployment of data, and frequently exposed errors that reflect and amplify existing social cleavages and failures, such as racial and gender biases. Other sources of distrust include the lack of “ground truth” against which to measure the results of learned algorithms, divergence of interests between those affected and those designing the tools, invasion of individual privacy, and the inapplicability of measures such as transparency and participation that build trust in other institutions. Needed steps to increase trust in AI systems include involvement of broader and diverse stakeholders in decisions around selection of uses, data, and predictors; investment in methods of recourse for errors and bias commensurate with the risks of errors and bias; and regulation prompting competition for trust. © 2022 by Cynthia Dwork & Martha Minow. Published under a Creative Commons Attribution-NonCommercial 4.0 International (CC BY-NC 4.0) license.
ISSN: 0011-5266
DOI: 10.1162/DAED_a_01918