Reflection machines: increasing meaningful human control over Decision Support Systems

Bibliographic Details
Main Authors: Cornelissen, N.A.J. (Author), Haselager, W.F.G. (Author), Schraffenberger, H.K. (Author), van Eerdt, R.J.M. (Author)
Format: Article
Language: English
Published: Springer Science and Business Media B.V. 2022
Subjects:
Online Access: View Fulltext in Publisher
LEADER 02451nam a2200397Ia 4500
001 10.1007-s10676-022-09645-y
008 220425s2022 CNT 000 0 und d
020 |a 1388-1957 (ISSN) 
245 1 0 |a Reflection machines: increasing meaningful human control over Decision Support Systems 
260 0 |b Springer Science and Business Media B.V.  |c 2022 
856 |z View Fulltext in Publisher  |u https://doi.org/10.1007/s10676-022-09645-y 
520 3 |a Rapid developments in Artificial Intelligence are leading to an increasing human reliance on machine decision making. Even in collaborative efforts with Decision Support Systems (DSSs), where a human expert is expected to make the final decisions, it can be hard to keep the expert actively involved throughout the decision process. DSSs suggest their own solutions and thus invite passive decision making. To keep humans actively ‘on’ the decision-making loop and counter overreliance on machines, we propose a ‘reflection machine’ (RM). This system asks users questions about their decision strategy and thereby prompts them to evaluate their own decisions critically. We discuss what forms RMs can take and present a proof-of-concept implementation of an RM that can produce feedback on users’ decisions in the medical and legal domains. We show that the prototype requires very little domain knowledge to create reasonably intelligent critiquing questions. With this prototype, we demonstrate the technical feasibility of developing RMs and hope to pave the way for future research into their effectiveness and value. © 2022, The Author(s). 
650 0 4 |a AI ethics 
650 0 4 |a Decision making 
650 0 4 |a Decision support systems 
650 0 4 |a Domain Knowledge 
650 0 4 |a Final decision 
650 0 4 |a Human control 
650 0 4 |a Human expert 
650 0 4 |a Human-machine interaction 
650 0 4 |a Machine decisions 
650 0 4 |a Meaningful human control 
650 0 4 |a Philosophical aspects 
650 0 4 |a Responsibility gap 
700 1 |a Cornelissen, N.A.J.  |e author 
700 1 |a Haselager, W.F.G.  |e author 
700 1 |a Schraffenberger, H.K.  |e author 
700 1 |a van Eerdt, R.J.M.  |e author 
773 |t Ethics and Information Technology