Summary: | Ph.D. === 國立臺灣大學 === 電機工程學系研究所 === 86 ===

Classifiers are important signal-processing mechanisms in a variety of applications, and advances in neural networks have added many new classifier models. This dissertation studies how to evaluate these neural network classifiers and uses the evaluation results to design a novel neural network classifier that can both learn fast and classify fast. Classifier performance is evaluated on three factors: classification accuracy, learning complexity, and classification complexity. Classification accuracy is the major deciding factor; only classifiers that can achieve high classification accuracy are useful for today's complex and large-scale applications. Chapter 2 will discuss how to evaluate the classification accuracy of classifiers. Classifiers will first be compared on classification accuracy, and a group of excellent classifiers will then be further compared on learning complexity and classification complexity. The comparison will show that these excellent classifiers have equal classification accuracy but very different learning complexity and classification complexity. For example, the Learning Vector Quantization (LVQ) classifier is fast in learning but slow in classification, while the Multi-Layered Perceptron (MLP) classifier is slow in learning but fast in classification. Therefore, this dissertation proposes a novel neural network classifier that balances learning complexity with classification complexity.

The neural network classifier proposed in Chapter 3 is called the neural network classification tree with intelligent search strategy (NCI) model. This model integrates a neural network, a classification tree, and an intelligent search strategy, so that it can be combined with existing neural network models to reduce classification complexity without sacrificing classification accuracy.
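As a rough illustration of this idea, the sketch below builds a tiny classification tree whose internal nodes each hold a few prototype (perceptron-like) weight vectors and classifies a sample with a best-first search that expands only the most promising branches, so the number of evaluations grows with tree depth rather than with the total number of prototypes. The node structure, scoring rule, beam width, and names such as nci_classify are illustrative assumptions, not the exact NCI design from Chapter 3.

    import numpy as np

    class TreeNode:
        # Internal nodes hold one prototype vector per child; leaves hold a class
        # label.  Both the node model and the search rule below are illustrative
        # stand-ins, not a reproduction of the NCI model.
        def __init__(self, prototypes=None, children=None, label=None):
            self.prototypes = prototypes            # (n_children, n_features)
            self.children = children or []          # list of TreeNode
            self.label = label                      # class label if this is a leaf

    def nci_classify(root, x, beam=2):
        # Best-first ("intelligent") search: instead of scoring every prototype
        # in the tree (as a flat LVQ classifier would), expand only the `beam`
        # most promising children of each visited node.
        frontier = [root]
        while frontier:
            node = frontier.pop(0)
            if node.label is not None:              # reached a leaf
                return node.label
            scores = -np.linalg.norm(node.prototypes - x, axis=1)
            best = np.argsort(scores)[::-1][:beam]
            frontier = [node.children[i] for i in best] + frontier
        return None

    # Tiny usage example with one internal node and two leaves.
    leaf_a, leaf_b = TreeNode(label="A"), TreeNode(label="B")
    root = TreeNode(prototypes=np.array([[0.0, 0.0], [1.0, 1.0]]),
                    children=[leaf_a, leaf_b])
    print(nci_classify(root, np.array([0.9, 1.1])))  # prints "B"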
A novel clustering algorithm called perceptron clustering is proposed in Section 4 to build the classification tree of the NCI model. Perceptron clustering is a hierarchical, agglomerative clustering procedure with two distinguishing features: first, perceptrons rather than training samples are partitioned into clusters; second, class labels, which represent supervised information, are used.
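The sketch below illustrates that agglomerative idea in a minimal form: the items being merged are perceptron weight vectors rather than training samples, and each perceptron's class label is used to discourage mixed-class merges. The distance measure, the label penalty, and the name perceptron_clustering are assumptions for illustration only; the actual merge criterion is the one defined in Section 4.

    import numpy as np

    def perceptron_clustering(weights, labels, n_clusters):
        # Agglomerative sketch: cluster perceptron weight vectors, not samples,
        # and use each perceptron's class label as supervised information by
        # penalising merges across classes (an assumed, simple penalty).
        clusters = [{"members": [i], "centroid": np.asarray(w, dtype=float), "label": y}
                    for i, (w, y) in enumerate(zip(weights, labels))]
        while len(clusters) > n_clusters:
            best = None
            for i in range(len(clusters)):
                for j in range(i + 1, len(clusters)):
                    d = np.linalg.norm(clusters[i]["centroid"] - clusters[j]["centroid"])
                    if clusters[i]["label"] != clusters[j]["label"]:
                        d *= 10.0                    # discourage mixed-class merges
                    if best is None or d < best[0]:
                        best = (d, i, j)
            _, i, j = best
            a, b = clusters[i], clusters.pop(j)
            na, nb = len(a["members"]), len(b["members"])
            a["centroid"] = (na * a["centroid"] + nb * b["centroid"]) / (na + nb)
            a["members"] += b["members"]
            if a["label"] != b["label"]:
                a["label"] = None                    # mark the cluster as mixed
        return clusters

    # e.g. weights = trained LVQ codebook vectors, labels = their class labels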
The NCI model is applied to several benchmark applications. The simulation results show that NCI can reduce the classification complexity of the LVQ classifier to between one-third and one-seventh, depending on the application, while increasing the learning complexity only slightly. Therefore, the NCI model proposed in this dissertation strikes a better balance between classification complexity and learning complexity than either the MLP classifier or the LVQ classifier.
|