Summary: | Master's thesis === Shih Chien University (實踐大學) === Master's Program, Department of Information Technology and Management (資訊科技與管理學系碩士班) === 97 === Abstract
Artificial neural networks are engineering structures that imitate the learning mechanisms of the human brain and nervous system. They have been applied to many practical problems and have achieved good results. Multilayer neural networks are the most commonly used network structure. One difficulty with multilayer neural networks is the catastrophic forgetting problem: when the network learns new information, it overwrites the old, so the old information is no longer usable.
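The following minimal sketch (not taken from this thesis) illustrates the effect described above: a small multilayer network is trained on one task and then on a second task only, after which its accuracy on the first task typically degrades. It uses scikit-learn's MLPClassifier; the data, network size, and training schedule are illustrative assumptions.

    import numpy as np
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(0)

    def make_task(c0, c1):
        # Two Gaussian blobs centred at c0 (label 0) and c1 (label 1).
        X = np.vstack([rng.normal(c0, 0.5, size=(200, 2)),
                       rng.normal(c1, 0.5, size=(200, 2))])
        y = np.array([0] * 200 + [1] * 200)
        return X, y

    Xa, ya = make_task([0.0, 2.0], [0.0, -2.0])   # task A: separated along x2
    Xb, yb = make_task([2.0, 0.0], [-2.0, 0.0])   # task B: separated along x1

    net = MLPClassifier(hidden_layer_sizes=(8,))

    # Phase 1: learn task A incrementally.
    for _ in range(200):
        net.partial_fit(Xa, ya, classes=[0, 1])
    acc_a_before = net.score(Xa, ya)

    # Phase 2: continue training on task B only; the shared weights are
    # updated for task B, and performance on task A typically degrades.
    for _ in range(200):
        net.partial_fit(Xb, yb)

    print(f"task A accuracy before: {acc_a_before:.2f}, "
          f"after learning task B: {net.score(Xa, ya):.2f}")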
To address this problem, a GA-based incremental learning structure was developed in this research. The goal was to implement a learning structure that can acquire new knowledge without interfering with previously acquired knowledge. The structure learns new, more advanced skills by building on techniques it has already learned: once the network has mastered an elementary skill, it can learn more complicated skills while remaining able to perform the tasks learned earlier.
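As an illustration of the general idea only (the thesis's actual architecture and genetic operators are not reproduced here), the sketch below uses a genetic algorithm to search the weights of a newly added module for a new task; a previously trained network would simply be left frozen, so its earlier behaviour cannot be overwritten. All module sizes, operators, and parameter values are assumptions.

    import numpy as np

    rng = np.random.default_rng(1)

    def new_module_output(weights, X):
        # One hidden layer (4 units) plus a scalar output for the new task.
        W1 = weights[:8].reshape(2, 4)      # input (2) -> hidden (4)
        b1 = weights[8:12]
        W2 = weights[12:16].reshape(4, 1)   # hidden -> output
        b2 = weights[16]
        h = np.tanh(X @ W1 + b1)
        return (h @ W2).ravel() + b2

    def fitness(weights, X, y):
        # Classification accuracy of the new module on the new task.
        pred = (new_module_output(weights, X) > 0).astype(int)
        return (pred == y).mean()

    def evolve(X, y, pop_size=40, n_genes=17, generations=200):
        pop = rng.normal(size=(pop_size, n_genes))
        for _ in range(generations):
            scores = np.array([fitness(ind, X, y) for ind in pop])
            parents = pop[np.argsort(scores)[::-1][: pop_size // 2]]  # truncation selection
            children = parents + rng.normal(scale=0.1, size=parents.shape)  # mutation
            pop = np.vstack([parents, children])
        scores = np.array([fitness(ind, X, y) for ind in pop])
        return pop[scores.argmax()]

    # New-task data: label 1 when x1 + x2 > 0. The weights of any previously
    # trained module are not part of the search, so they stay unchanged.
    X_new = rng.normal(size=(200, 2))
    y_new = (X_new.sum(axis=1) > 0).astype(int)
    best = evolve(X_new, y_new)
    print("new-task accuracy:", fitness(best, X_new, y_new))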
To demonstrate the feasibility of the proposed learning structure, three classification problems from the UCI Machine Learning Repository were tested. Simulation results show that over 90% of the test data are correctly classified in all of the problems, indicating that the proposed learning structure can overcome the catastrophic forgetting problem. Attractive features of the new approach include its modular structure, system expandability, lower hardware requirements, and potentially faster learning.
Keywords: Incremental Learning, Genetic Algorithms, Multilayer Neural Networks, Classification Problems, Catastrophic Forgetting
|