Distrust of Artificial Intelligence: Sources & Responses from Computer Science & Law

Social distrust of AI stems in part from incomplete and faulty data sources, inappropriate redeployment of data, and frequently exposed errors that reflect and amplify existing social cleavages and failures, such as racial and gender biases. Other sources of distrust include the lack of "ground truth" against which to measure the results of learned algorithms, divergence of interests between those affected and those designing the tools, invasion of individual privacy, and the inapplicability of measures such as transparency and participation that build trust in other institutions. Needed steps to increase trust in AI systems include involvement of broader and diverse stakeholders in decisions around selection of uses, data, and predictors; investment in methods of recourse for errors and bias commensurate with the risks of errors and bias; and regulation prompting competition for trust.


Bibliographic Details
Main Authors: Dwork, C. (Author), Minow, M. (Author)
Format: Article
Language: English
Published: MIT Press Journals, 2022
Online Access: View Fulltext in Publisher
LEADER 01523nam a2200145Ia 4500
001 10.1162-DAED_a_01918
008 220517s2022 CNT 000 0 und d
020 |a 0011-5266 (ISSN) 
245 1 0 |a Distrust of Artificial Intelligence: Sources & Responses from Computer Science & Law 
260 0 |b MIT Press Journals  |c 2022 
856 |z View Fulltext in Publisher  |u https://doi.org/10.1162/DAED_a_01918 
520 3 |a Social distrust of AI stems in part from incomplete and faulty data sources, inappropriate redeployment of data, and frequently exposed errors that reflect and amplify existing social cleavages and failures, such as racial and gender biases. Other sources of distrust include the lack of “ground truth” against which to measure the results of learned algorithms, divergence of interests between those affected and those designing the tools, invasion of individual privacy, and the inapplicability of measures such as transparency and participation that build trust in other institutions. Needed steps to increase trust in AI systems include involvement of broader and diverse stakeholders in decisions around selection of uses, data, and predictors; investment in methods of recourse for errors and bias commensurate with the risks of errors and bias; and regulation prompting competition for trust. © 2022 by Cynthia Dwork & Martha Minow. Published under a Creative Commons Attribution-NonCommercial 4.0 International (CC BY-NC 4.0) license. 
700 1 |a Dwork, C.  |e author 
700 1 |a Minow, M.  |e author 
773 |t Daedalus