Examining the effect of explanation on satisfaction and trust in AI diagnostic systems

Abstract

Background: Artificial Intelligence has the potential to revolutionize healthcare, and it is increasingly being deployed to support and assist medical diagnosis. One potential application of AI is as the first point of contact for patients, providing initial diagnoses before a patient is referred to a specialist and allowing health care professionals to focus on more challenging and critical aspects of treatment. But for AI systems to succeed in this role, it will not be enough for them to merely provide accurate diagnoses and predictions; they will also need to explain (to both physicians and patients) why the diagnoses are made. Without such explanations, accurate and correct diagnoses and treatments might be ignored or rejected.

Method: It is important to evaluate the effectiveness of these explanations and to understand the relative effectiveness of different kinds of explanation. In this paper, we examine this problem across two simulation experiments. In the first experiment, we tested a re-diagnosis scenario to understand the effect of local and global explanations. In a second simulation experiment, we implemented different forms of explanation in a similar diagnosis scenario.

Results: Explanation improved satisfaction measures during the critical re-diagnosis period but had little effect before re-diagnosis (when initial treatment was taking place) or after (when an alternate diagnosis resolved the case successfully). Furthermore, initial "global" explanations about the process had no impact on immediate satisfaction but improved later judgments of understanding about the AI. The second experiment showed that visual and example-based explanations integrated with rationales had a significantly better impact on patient satisfaction and trust than no explanation or text-based rationales alone. As in Experiment 1, these explanations had their effect primarily on immediate measures of satisfaction during the re-diagnosis crisis, with little advantage prior to re-diagnosis or once the diagnosis was successfully resolved.

Conclusion: These two studies support several conclusions about how patient-facing explanatory diagnostic systems may succeed or fail. Based on these studies and a review of the literature, we provide design recommendations for the explanations offered by AI systems in the healthcare domain.


Bibliographic Details
Main Authors: Lamia Alam, Shane Mueller (Michigan Technological University)
Format: Article
Language: English
Published: BMC, 2021-06-01
Series: BMC Medical Informatics and Decision Making
ISSN: 1472-6947
Online Access: https://doi.org/10.1186/s12911-021-01542-6