Summary: Are agency and responsibility ascribable solely to humans? The advent of artificial intelligence (AI), including the development of so-called “affective computing,” appears to be chipping away at the traditional building blocks of moral agency and responsibility. Spurred by the realization that fully autonomous, self-aware, perhaps even rational and emotionally intelligent computer systems may emerge in the future, professionals in engineering and computer science have historically been the most vocal in warning of the ways in which such systems may alter our understanding of computer ethics. Despite the increasing attention of many philosophers and ethicists to the development of AI, a fair amount of conceptual muddiness persists regarding the conditions for assigning agency and responsibility to such systems, from both an ethical and a legal perspective. Moral and legal philosophy may overlap to a high degree, but they are neither interchangeable nor identical. This paper attempts to clarify the actual and hypothetical ethical and legal situations governing a very particular type of advanced, or “intelligent,” computer system: medical decision support systems (MDSS) that feature AI in their system design. While it is well recognized that MDSS can be categorized by type and function, further categorization of their mediating effects on users and patients is needed before any level of moral or legal responsibility can be ascribed to them. I conclude that various doctrines of Anglo legal systems appear to allow for the possibility of assigning specific types of agency, and thus specific types of legal responsibility, to some types of MDSS. Strong arguments for assigning moral agency and responsibility, however, are still lacking.