Adaptive Cognitive Mechanisms to Maintain Calibrated Trust and Reliance in Automation

Trust calibration for a human–machine team is the process by which a human adjusts their expectations of the automation’s reliability and trustworthiness; adaptive support for trust calibration is needed to engender appropriate reliance on automation. Herein, we leverage an instance-based learning ACT-R cognitive model of decisions to obtain and rely on an automated assistant for visual search in a UAV interface. This cognitive model matches well with the human predictive power statistics measuring reliance decisions; we obtain from the model an internal estimate of automation reliability that mirrors human subjective ratings. The model is able to predict the effect of various potential disruptions, such as environmental changes or particular classes of adversarial intrusions, on human trust in automation. Finally, we consider the use of model predictions to improve automation transparency in ways that account for human cognitive biases in order to optimize the bidirectional interaction between human and machine through supporting trust calibration. The implications of our findings for the design of reliable and trustworthy automation are discussed.
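The abstract names the core technique: an instance-based learning model in the ACT-R architecture that decides, trial by trial, whether to rely on an automated assistant. Purely as an illustration of how such a model typically operates, below is a minimal Python sketch of instance-based learning with ACT-R-style activation and blending. The option names ("rely"/"manual"), payoff scheme, parameter values, and the 80%-reliable automation in the toy loop are illustrative assumptions, not details taken from the article.

import math
import random

DECAY = 0.5                   # base-level decay parameter d (assumed value)
NOISE = 0.25                  # activation noise s (assumed value)
TEMP = NOISE * math.sqrt(2)   # blending temperature, a common IBL convention

class IBLAgent:
    def __init__(self):
        # Each stored instance: (option, observed payoff, presentation timestamps)
        self.instances = []
        self.t = 0

    def _activation(self, timestamps):
        # Base-level activation: ln(sum over presentations of (age)^-d) plus noise
        base = math.log(sum((self.t - ts) ** -DECAY for ts in timestamps))
        return base + random.gauss(0, NOISE)

    def _blended_value(self, option):
        # Blend past payoffs for this option, weighted by retrieval probability
        acts, pays = [], []
        for opt, payoff, stamps in self.instances:
            if opt == option:
                acts.append(self._activation(stamps))
                pays.append(payoff)
        if not pays:
            return 1.0  # optimistic prior to encourage early exploration (assumed)
        weights = [math.exp(a / TEMP) for a in acts]
        z = sum(weights)
        return sum(w / z * p for w, p in zip(weights, pays))

    def choose(self):
        self.t += 1
        # Rely on the automation when its blended expected payoff is higher
        values = {opt: self._blended_value(opt) for opt in ("rely", "manual")}
        return max(values, key=values.get)

    def observe(self, option, payoff):
        # Store the experienced outcome as a new instance at the current time
        self.instances.append((option, payoff, [self.t]))

# Toy usage: automation succeeds 80% of the time, manual search 60%
agent = IBLAgent()
for trial in range(200):
    choice = agent.choose()
    success = random.random() < (0.8 if choice == "rely" else 0.6)
    agent.observe(choice, 1.0 if success else 0.0)

In a sketch like this, the blended value of the "rely" option plays the role of the model's internal estimate of automation reliability, which is the kind of quantity the abstract says mirrors human subjective trust ratings.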


Bibliographic Details
Main Authors: Christian Lebiere (Department of Psychology, Carnegie Mellon University, Pittsburgh, PA, United States), Leslie M. Blaha (711th Human Performance Wing, Air Force Research Laboratory, Pittsburgh, PA, United States), Corey K. Fallon (Pacific Northwest National Laboratory, Richland, WA, United States), Brett Jefferson (Pacific Northwest National Laboratory, Richland, WA, United States)
Format: Article
Language: English
Published: Frontiers Media S.A., 2021-05-01
Series: Frontiers in Robotics and AI, Vol. 8 (2021), Article 652776
ISSN: 2296-9144
DOI: 10.3389/frobt.2021.652776
Subjects: cognitive architectures; ACT-R; trust in automation; automation transparency; trust calibration; human–machine teaming
Online Access: https://www.frontiersin.org/articles/10.3389/frobt.2021.652776/full