Rapid Trust Calibration through Interpretable and Uncertainty-Aware AI
Main Authors: Richard Tomsett, Alun Preece, Dave Braines, Federico Cerutti, Supriyo Chakraborty, Mani Srivastava, Gavin Pearson, Lance Kaplan
Format: Article
Language: English
Published: Elsevier, 2020-07-01
Series: Patterns
Online Access: http://www.sciencedirect.com/science/article/pii/S266638992030060X
Record ID: doaj-920c851bbbe54ab89face4da8331403e

Author Affiliations:
Richard Tomsett: Emerging Technology, IBM Research Europe, Hursley Park Road, Hursley SO21 2JN, UK (corresponding author)
Alun Preece: Crime and Security Research Institute, Cardiff University, Friary House, Greyfriars Road, Cardiff CF10 3AE, UK
Dave Braines: Emerging Technology, IBM Research Europe, Hursley Park Road, Hursley SO21 2JN, UK
Federico Cerutti: Crime and Security Research Institute, Cardiff University, Friary House, Greyfriars Road, Cardiff CF10 3AE, UK; Dipartimento di Ingegneria dell'Informazione, Università degli Studi di Brescia, Via Branze 38, Brescia 25123, Italy
Supriyo Chakraborty: IBM Research, IBM Thomas J. Watson Research Center, 1101 Kitchawan Road, Yorktown Heights, NY 10598, USA
Mani Srivastava: Networked and Embedded Systems Laboratory, Electrical and Computer Engineering Department, University of California, Los Angeles, 420 Westwood Plaza, Los Angeles, CA 90095-1594, USA
Gavin Pearson: Defence Science and Technology Laboratory, Porton Down, Salisbury, Wiltshire SP4 0JQ, UK
Lance Kaplan: CCDC Army Research Laboratory, 2800 Powder Mill Road, Adelphi, MD 20783, USA
ISSN: 2666-3899
Description:
Artificial intelligence (AI) systems hold great promise as decision-support tools, but we must be able to identify and understand their inevitable mistakes if they are to fulfill this potential. This is particularly true in domains where the decisions are high-stakes, such as law, medicine, and the military. In this Perspective, we describe the particular challenges for AI decision support posed in military coalition operations. These include having to deal with limited, low-quality data, which inevitably compromises AI performance. We suggest that these problems can be mitigated by taking steps that allow rapid trust calibration, so that decision makers understand the AI system's limitations and likely failures and can calibrate their trust in its outputs appropriately. We propose that AI services can achieve this by being both interpretable and uncertainty-aware. Creating such AI systems poses various technical and human factors challenges. We review these challenges and recommend directions for future research.

The Bigger Picture: This article is about artificial intelligence (AI) used to inform high-stakes decisions, such as those arising in legal, healthcare, or military contexts. Users must have an understanding of the capabilities and limitations of an AI system when making high-stakes decisions. Usually this requires the user to interact with the system and learn over time how it behaves in different circumstances. We propose that long-term interaction would not be necessary for an AI system with the properties of interpretability and uncertainty awareness. Interpretability makes clear what the system “knows,” while uncertainty awareness reveals what the system does not “know.” This allows the user to rapidly calibrate their trust in the system's outputs, spotting flaws in its reasoning or seeing when it is unsure. We illustrate these concepts in the context of a military coalition operation, where decision makers may be using AI systems with which they are unfamiliar and which are operating in rapidly changing environments. We review current research in these areas, considering both technical and human factors challenges, and propose a framework for future work based on Lasswell's communication model.
Subjects: AI; artificial intelligence; machine learning; trust; interpretability; explanation