Problematic Aspects of the Use of Expert Systems in Law

Bibliographic Details
Main Author: Marius Kalinauskas
Format: Article
Language: English
Published: Mykolas Romeris University 2011-08-01
Series: Social Technologies
Subjects:
law
Online Access: http://www.mruni.eu/en/mokslo_darbai/st/archyvas/dwn.php?id=293689
Description
Summary: The use of expert systems in law raises many problematic questions. The complexity and intricacy of law, combined with the limited capabilities of information technologies, make it difficult to create flawlessly working expert systems. In this article the author analyses problematic aspects related to the use of expert systems in law. Various studies are compared on the basis of an analysis of scientific articles. The author examines the practical difficulties of representing legal norms, creating expert knowledge ontologies, and the liability issues of expert systems. The legal responsibility of expert system developers, users, and owners is also covered in this paper. The creation of legal ontologies is a complicated process because of the nature of the subject itself and the complexity and quantity of knowledge which must be represented in order to have a fully functional legal expert system. Legal information basically consists of legal norms, doctrine, precedents and expert knowledge. All of these areas have specific representation issues, but the most difficult part is building an ontology and representation of expert knowledge. Different experts may have distinct points of view on similar cases. Human decisions are not made merely by applying certain rules to a problem decision pattern: foresight, analytical skills and critical thinking are required in legal professional work. Human reasoning and decision-making is not based only on symbolic values; it also involves intermediate symbolic assumptions. So the question is: is it possible to give a clear structure to something which has no permanent state? The other problem analysed in this article is artificial reasoning methods, which are essentially different forms of pattern recognition with some specific methods applied to them. The second part of the paper analyses the liability of expert systems. Nowadays expert systems cannot be legally responsible for their decisions. They lack the intellectual potential to hold rights and obligations. They are not legal subjects, so they cannot be held responsible for their actions. However, the evolution of intellectualized systems is slow but steady (compared to other areas of computer science), so there is a possibility that expert systems may gain rights and obligations in the future. There is some doubt as to whether intellectualized systems will ever attain a status similar to that of a human legal subject, so some quasi-subject status may instead be applied to them. Legal expert systems are already used in legal practice, and they do make mistakes. Who is responsible for these mistakes, and who should be held liable for the negative consequences? This is a hard question to answer because of the many factors that may cause the poor performance of a system. While expert systems bear no direct legal responsibility, the damage done by their decisions may be real, so a situation may arise in which no responsible subject can be identified. Some researchers claim that expert systems help improve the juridical quality of the legal process, but the risk of mistake is always there, so intellectualized systems must be used only as advisers, not as instruments for implementing justice.
ISSN: 2029-7564
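
The abstract describes legal expert systems as rule-based reasoners that apply encoded norms to case facts and notes the resulting liability gap. Purely as an illustrative aid, the minimal Python sketch below shows what such a rigid rule base might look like; the case attributes, rules, and conclusions are invented assumptions for illustration and are not drawn from the article or from any real legal system. The fall-through branch hints at the "no responsible subject" situation the abstract raises.

    # Hypothetical sketch of rule-based legal norm representation.
    # All rules, facts, and conclusions here are illustrative assumptions only.

    from dataclasses import dataclass

    @dataclass
    class Case:
        damage_caused: bool        # did the system's advice cause damage?
        user_followed_advice: bool
        developer_negligent: bool

    def advise_liability(case: Case) -> str:
        """Apply fixed if-then rules, the way a simple legal expert system might."""
        if not case.damage_caused:
            return "no liability question arises"
        if case.developer_negligent:
            return "developer may be liable"
        if case.user_followed_advice:
            return "user/owner may bear the risk"
        # Rigid rules leave a gap: real legal reasoning would also weigh doctrine,
        # precedent, and expert judgement that are hard to encode as rules.
        return "no responsible subject identified by the rule base"

    print(advise_liability(Case(damage_caused=True,
                                user_followed_advice=False,
                                developer_negligent=False)))

Running the example prints the fall-through conclusion, illustrating how a purely symbolic rule set can fail to assign responsibility even when real damage has occurred.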