Summary: | What are the implications when robots are pre-programmed to “behave ethically”? What are the implications for the relation between the other and me when lethal autonomous weapons systems target and destroy human beings by themselves? In this paper, I address these two interrelated questions through the philosophy of Emmanuel Levinas. Firstly (chapter I), I clarify the meaning of “lethal autonomous weapons systems” by discussing concepts such as robot and autonomy, and by outlining the debates surrounding these emerging weapons. I also explain why the philosophy of Levinas is of importance for this discussion. Secondly (chapter II), I discuss two important Levinasian concepts, namely “substitution” and “thematization”. Finally (chapter III), using Levinas’ concept of substitution, I argue that the issue of making robots “behave ethically”, as some of the ongoing debates on LAWS suggest, misses the deeper question of ethics in the Levinasian sense. LAWS cannot be “ethical” or programmed to “behave ethically”. Ethics is already “there”, before ethical behaviour, situated in and constitutive of my very subjectivity as “for-the-other”. In light of this, and together with Levinas’ understanding of thematization, I further argue that the very conception of lethal autonomous weapons systems amounts to a denial of the ethical encounter, implying violence and a problematic dehumanization already at work towards both the “I” and the other.