Reduction Techniques for Training Support Vector Machines
Master's thesis === National Taiwan University === Graduate Institute of Computer Science and Information Engineering === Academic year 90 === Recently, two kinds of reduction techniques aimed at saving training time for SVM problems with nonlinear kernels were proposed. Instead of solving the standard SVM formulation, these methods explicitly alter the SVM formulation, and their solutions are used to classify data (full abstract in the description field below).
Main Authors: Kuan-ming Lin (林冠明)
Other Authors: Chih-Jen Lin (林智仁)
Format: Others
Language: en_US
Published: 2002
Online Access: http://ndltd.ncl.edu.tw/handle/60505360685138998940
id: ndltd-TW-090NTU00392029
record_format: oai_dc
spelling: ndltd-TW-090NTU00392029 | 2015-10-13T14:38:19Z | http://ndltd.ncl.edu.tw/handle/60505360685138998940 | Reduction Techniques for Training Support Vector Machines | 支向機的簡約技術 | Kuan-ming Lin 林冠明 | Master's thesis, National Taiwan University, Graduate Institute of Computer Science and Information Engineering, academic year 90 | abstract as in the description field below | Chih-Jen Lin 林智仁 | 2002 | 學位論文 ; thesis | 62 | en_US
collection: NDLTD
language: en_US
format: Others
sources: NDLTD
description:
Master's thesis === National Taiwan University === Graduate Institute of Computer Science and Information Engineering === Academic year 90 === Recently, two kinds of reduction techniques aimed at saving training time for SVM problems with nonlinear kernels were proposed. Instead of solving the standard SVM formulation, these methods explicitly alter the SVM formulation, and their solutions are used to classify data. The first approach, the reduced support vector machine (RSVM), preselects a subset of the data as support vectors and solves a smaller optimization problem. The second approach uses incomplete Cholesky factorization (ICF) to obtain a low-rank approximation of the kernel matrix.

We find that several issues concerning their practical use have not yet been fully discussed. For example, it is not known whether they possess generalization ability comparable to that of the standard SVM. In addition, we would like to know how large a problem must be before they outperform the standard SVM in training time. In this thesis we show that the formulation of each technique is already in the form of a linear SVM, and we discuss several suitable implementations.

Experiments indicate that, in general, the test accuracy of both techniques is slightly lower than that of the standard SVM. In addition, for problems with up to tens of thousands of data points, if the percentage of support vectors is not high, existing SVM implementations are quite competitive in training time. Thus, the two techniques are mainly useful either for larger problems or for problems with many support vectors. The experiments in this thesis also serve as comparisons of (1) different implementations of linear SVM; (2) the standard SVM with linear and quadratic cost functions; and (3) two ICF algorithms for positive definite dense matrices. (A minimal illustrative sketch of both techniques follows below.)
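Since the description above only outlines the two techniques in prose, the following minimal Python sketch may help make them concrete. It is an illustration written for this record, not the thesis's implementation: the function names, the choice of an RBF kernel, the random subset selection, the tolerance, and the demo data are all assumptions. `reduced_kernel` mirrors the RSVM idea of preselecting a subset of the data and forming a rectangular kernel matrix, while `incomplete_cholesky` is a generic pivoted ICF yielding a low-rank factor G with K ≈ GGᵀ. In both cases the rows of the resulting matrix serve as feature vectors for any linear SVM solver, which is the sense in which each reduced formulation is already a linear SVM.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    """RBF kernel block K(X, Y)_ij = exp(-gamma * ||x_i - y_j||^2)."""
    sq = (X ** 2).sum(1)[:, None] + (Y ** 2).sum(1)[None, :] - 2.0 * X @ Y.T
    return np.exp(-gamma * np.maximum(sq, 0.0))

def reduced_kernel(X, subset_size, gamma=1.0, seed=0):
    """RSVM-style preselection: choose a random subset A_bar of the data and
    form the rectangular n x m kernel matrix K(A, A_bar); its rows act as
    features for a linear SVM."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(X.shape[0], size=subset_size, replace=False)
    return rbf_kernel(X, X[idx], gamma), idx

def incomplete_cholesky(X, max_rank, gamma=1.0, tol=1e-6):
    """Pivoted incomplete Cholesky of the n x n RBF kernel matrix: returns
    G (n x k, k <= max_rank) with K ~= G @ G.T, forming only one kernel
    column per step instead of the full matrix."""
    n = X.shape[0]
    G = np.zeros((n, max_rank))
    d = np.ones(n)                    # residual diagonal; K_ii = 1 for RBF
    pivots = []
    for j in range(max_rank):
        i = int(np.argmax(d))         # greedy pivot: largest residual entry
        if d[i] <= tol:               # approximation already good enough
            return G[:, :j], pivots
        pivots.append(i)
        col = rbf_kernel(X, X[i:i + 1], gamma).ravel()       # column K[:, i]
        G[:, j] = (col - G[:, :j] @ G[i, :j]) / np.sqrt(d[i])
        d = np.maximum(d - G[:, j] ** 2, 0.0)
    return G, pivots

if __name__ == "__main__":
    # Tiny usage sketch on random data; in practice K_red or G would be
    # handed to a linear SVM solver in place of the full kernel matrix.
    X = np.random.default_rng(1).standard_normal((500, 10))
    K_red, idx = reduced_kernel(X, subset_size=50, gamma=0.5)
    G, piv = incomplete_cholesky(X, max_rank=50, gamma=0.5)
    full_K = rbf_kernel(X, X, gamma=0.5)
    print(K_red.shape, G.shape, np.abs(full_K - G @ G.T).max())
```

Both sketches keep only an n x m or n x k matrix in memory, which is where the potential training-time and memory savings over the standard nonlinear SVM come from; the trade-off, as the abstract notes, is a possible small loss in test accuracy.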
author2: Chih-Jen Lin
author: Kuan-ming Lin (林冠明)
title: Reduction Techniques for Training Support Vector Machines
publishDate: 2002
url: http://ndltd.ncl.edu.tw/handle/60505360685138998940