Summary: | Master's thesis === National Taiwan University of Science and Technology === Graduate Institute of Management Technology === 86 === Classification is an important area in pattern recognition. Feature extraction for classification is equivalent to retaining informative features or eliminating redundant features. However, because the decision boundary is nonlinear in most cases, there exist no absolutely redundant features, only approximately redundant ones. Eliminating approximately redundant features results in a decrease in classification accuracy. Even for two classes with multivariate normal distributions, classification accuracy is difficult to analyze, since the classification function involves quadratic terms. One approach to alleviating this difficulty is to simultaneously diagonalize the covariance matrices of the two classes, which can be achieved by applying orthonormal and whitening transformations to the measurement space. Once the covariance matrices are simultaneously diagonalized, the quadratic classification function is simplified, becomes much easier to analyze, and the classification accuracy can be studied in terms of the eigenvalues of the covariance matrices of the two classes. Thus, the decrease in classification accuracy incurred by eliminating approximately redundant features can be quantified. We empirically study the classification accuracy by varying the distribution parameters.
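As a concrete illustration of the simultaneous-diagonalization step described in the abstract, the following is a minimal sketch (not taken from the thesis; the function and variable names are illustrative). Assuming the two class covariance matrices S1 and S2 are symmetric positive definite, one whitens S1 via its eigendecomposition and then orthonormally diagonalizes the whitened S2, yielding a single transform A with A S1 A^T = I and A S2 A^T diagonal:

    import numpy as np

    def simultaneous_diagonalization(S1, S2):
        # Whitening transform for S1: with S1 = Phi diag(lam) Phi^T,
        # P = diag(lam^{-1/2}) Phi^T maps S1 to the identity matrix.
        lam, phi = np.linalg.eigh(S1)
        P = np.diag(lam ** -0.5) @ phi.T
        # Orthonormal transform: eigendecompose the whitened S2; the
        # combined transform A then diagonalizes both covariances at once.
        delta, psi = np.linalg.eigh(P @ S2 @ P.T)
        A = psi.T @ P
        return A, delta  # A @ S1 @ A.T = I, A @ S2 @ A.T = diag(delta)

    # Check on sample covariances of two synthetic classes.
    rng = np.random.default_rng(0)
    X1 = rng.standard_normal((500, 4))
    X2 = rng.standard_normal((500, 4)) @ np.diag([0.5, 1.0, 2.0, 3.0])
    S1, S2 = np.cov(X1, rowvar=False), np.cov(X2, rowvar=False)
    A, delta = simultaneous_diagonalization(S1, S2)
    assert np.allclose(A @ S1 @ A.T, np.eye(4), atol=1e-8)
    assert np.allclose(A @ S2 @ A.T, np.diag(delta), atol=1e-8)

In the transformed coordinates the quadratic classification function decouples into one term per feature, each governed by an eigenvalue delta_i, which is what makes the eigenvalue-based accuracy analysis described in the abstract tractable.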