Summary: In this contribution, I start from Levy’s valuable suggestion in neuroethics of distinguishing between the “slow-conscious <i>responsibility</i>” of us as persons and the “fast-unconscious <i>responsiveness</i>” of the sub-personal brain mechanisms studied in the cognitive neurosciences. Both, however, are <i>accountable</i> for how they respond to environmental (physical, social, and ethical) constraints. I propose to extend Levy’s suggestion to a fundamental distinction between the “moral responsibility of conscious communication agents” and the “ethical responsiveness of unconscious communication agents”, such as our brains, but also AI decision-support systems. Both, indeed, can be included in the category of the “sub-personal modules” of our moral agency as persons. I show the relevance of this distinction, also from the logical and computational standpoints, in both the neurosciences and computer science, for the current debate about an ethically accountable AI. Machine learning algorithms, indeed, when applied to automated supports for decision-making processes in many social, political, and economic spheres, are by no means “value-free” or “amoral”. They must satisfy a requirement of ethical responsiveness to avoid what has been called the unintended, but real, “algorithmic injustice”.