Utilitarian Calculations and the Moral Status of Strong AI
Summary: When, if ever, does a robot or other advanced AI system deserve the same moral consideration as a human being? From a philosophical perspective, addressing this question requires us to examine utilitarian theory and its most powerful tool, moral calculations, with some care. From a human’s perspective, the answer to this question is closely tied to the formulation of a broader criterion for full moral status, so it has important implications for the morality of actions toward humans and animals too – not just toward AI. And, from an android’s perspective, our answer to this question could be, without exaggeration, a matter of deactivation or continuation, of life or death. In this paper, I will use the backdrop of utilitarianism to make a case for my own answer. Sentience alone, I claim, is a sufficient condition for an AI system to have full moral status.