When to (or not to) trust intelligent machines: Insights from an evolutionary game theory analysis of trust in repeated games
The actions of intelligent agents, such as chatbots, recommender systems, and virtual assistants, are typically not fully transparent to the user. Consequently, users run the risk that such agents act in ways opposed to their preferences or goals. It is often argued that people use trust as a c...
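The abstract's framing, an evolutionary game theory analysis of trust in repeated games, can be illustrated with a generic two-population replicator model of a repeated trust game. The sketch below is an assumed, textbook-style construction, not the authors' actual model: the strategy names, the payoff values `Ru`, `Ra`, `Su`, `Ta`, `P`, and the continuation probability `w` are all hypothetical placeholders.

```python
# A minimal sketch (assumed illustration, not the paper's exact model):
# two-population replicator dynamics for a repeated trust game.
# Users choose TRUST (and keep trusting while honored) or NOT-TRUST;
# agents choose HONOR or ABUSE. With probability w another round follows.
# All payoff values below are hypothetical placeholders.

Ru, Ra = 3.0, 3.0   # payoffs when trust is honored (user, agent)
Su, Ta = -1.0, 5.0  # sucker payoff to user / temptation payoff to agent
P = 0.0             # outside option when no trust is placed
w = 0.8             # continuation probability of the repeated game

def payoffs(x, y):
    """Expected payoffs over a repeated interaction.

    x: fraction of users who trust; y: fraction of agents who honor.
    A trusting user keeps interacting while honored (an expected
    1/(1-w) rounds) but withdraws to the outside option once abused.
    """
    rounds = 1.0 / (1.0 - w)
    u_trust = y * Ru * rounds + (1.0 - y) * (Su + P * w * rounds)
    u_no    = P * rounds
    a_honor = x * Ra * rounds + (1.0 - x) * P * rounds
    a_abuse = x * (Ta + P * w * rounds) + (1.0 - x) * P * rounds
    return u_trust, u_no, a_honor, a_abuse

def replicate(x, y, dt=0.01, steps=20000):
    """Euler-integrate the two-population replicator equations."""
    for _ in range(steps):
        u_t, u_n, a_h, a_a = payoffs(x, y)
        x += dt * x * (1.0 - x) * (u_t - u_n)
        y += dt * y * (1.0 - y) * (a_h - a_a)
    return x, y

if __name__ == "__main__":
    x, y = replicate(0.5, 0.5)
    print(f"long-run trusting users: {x:.3f}, honest agents: {y:.3f}")
```

With these placeholder payoffs, honoring trust pays for agents whenever interactions last long enough (here, w > 1 - Ru/Ta = 0.4), so the dynamics converge to mutual trust; lowering w lets abuse take over, mirroring the intuition that repetition is what makes trusting intelligent machines viable.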
Main Authors: Han, T.A. (Author), Perret, C. (Author), Powers, S.T. (Author)
Format: Article
Language: English
Published: Elsevier B.V., 2021
Online Access: View Fulltext in Publisher
Similar Items
- Game Analysis of Access Control Based on User Behavior Trust
  by: Yan Wang, et al.
  Published: (2019-04-01)
- Note on Complete Proof of Axelrod’s Theorem
  by: Takashi SHIMIZU, et al.
  Published: (2003-10-01)
- Examining Spillovers between Long and Short Repeated Prisoner’s Dilemma Games Played in the Laboratory
  by: Antonio A. Arechar, et al.
  Published: (2018-01-01)
- A single ‘weight-lifting’ game covers all kinds of games
  by: Tatsuki Yamamoto, et al.
  Published: (2019-11-01)
- Topics in the emergence of cooperation in competing games
  Published: (2008)