Examining the effect of explanation on satisfaction and trust in AI diagnostic systems
Abstract

Background: Artificial Intelligence has the potential to revolutionize healthcare, and it is increasingly being deployed to support and assist medical diagnosis. One potential application of AI is as the first point of contact for patients, replacing initial diagnoses prior to sending a pati...
Main Authors: Lamia Alam, Shane Mueller
Format: Article
Language: English
Published: BMC, 2021-06-01
Series: BMC Medical Informatics and Decision Making
Online Access: https://doi.org/10.1186/s12911-021-01542-6
Similar Items
- Reciprocal Explanations: An Explanation Technique for Human-AI Partnership in Design Ideation
  by: Hegemann, Lena
  Published: (2020)
- Explainable AI Metrics and Properties for Evaluation and Analysis of Counterfactual Explanations
  by: Singh, Vandita
  Published: (2021)
- Can We Trust AI?
  Published: (2022)
- The evaluation of diagnostic explanations for inconsistencies
  by: Paolo Legrenzi, et al.
  Published: (2005-03-01)
- To trust or not to trust an explanation: using LEAF to evaluate local linear XAI methods
  by: Elvio Amparore, et al.
  Published: (2021-04-01)