Analysis and Implementation of Large-scale Linear RankSVM in Distributed Environments
Master's === National Taiwan University === Graduate Institute of Computer Science and Information Engineering === 104 === Linear rankSVM is a useful method to quickly produce a baseline model for learning to rank. Although its parallelization has been investigated and implemented on GPU, it may not handle large-scale data sets. In this thesis, we propose a distributed trust region...
Main Author: Wei-Lun Huang (黃煒倫)
Other Authors: Chih-Jen Lin (林智仁), advisor
Format: Others
Language: en_US
Published: 2016
Online Access: http://ndltd.ncl.edu.tw/handle/22726152369307756919
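The abstract refers to training L2-loss linear rankSVM, i.e. minimizing a squared-hinge loss over preference pairs within each query plus an L2 regularizer. The following is a minimal single-machine sketch of that objective, not the thesis's distributed trust-region Newton solver; the function name, the dense double loop, and the `qid` array marking query membership are illustrative assumptions, not from the thesis:

```python
import numpy as np

def ranksvm_l2_objective(w, X, y, qid, C=1.0):
    """f(w) = 1/2 ||w||^2 + C * sum over pairs (i, j) with
    qid_i == qid_j and y_i > y_j of max(0, 1 - w^T (x_i - x_j))^2.
    A reference (non-distributed) evaluation of the L2-loss
    linear rankSVM objective."""
    scores = X @ w                    # model scores for all instances
    obj = 0.5 * float(w @ w)          # L2 regularization term
    for q in np.unique(qid):          # preference pairs exist only within a query
        idx = np.where(qid == q)[0]
        for i in idx:
            for j in idx:
                if y[i] > y[j]:       # instance i should rank above instance j
                    margin = 1.0 - (scores[i] - scores[j])
                    if margin > 0:    # squared hinge loss on violated pairs
                        obj += C * margin ** 2
    return obj
```

A practical solver would avoid the quadratic pair enumeration (e.g. with order-statistics trees, as in prior linear rankSVM work) and, in the distributed setting the thesis studies, would partition the data so that the loss and its derivatives are computed locally and combined with a small amount of communication.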
id: ndltd-TW-104NTU05392023
record_format: oai_dc
collection: NDLTD
language: en_US
format: Others
type: 學位論文 (thesis), 48 pages
title: Analysis and Implementation of Large-scale Linear RankSVM in Distributed Environments
title (Chinese): 大規模線性排序支持向量機在分散式環境下之分析實作
author: Wei-Lun Huang (黃煒倫)
advisor: Chih-Jen Lin (林智仁)
degree: Master's, National Taiwan University, Graduate Institute of Computer Science and Information Engineering, academic year 104
description: Linear rankSVM is a useful method to quickly produce a baseline model for learning to rank. Although its parallelization has been investigated and implemented on GPU, it may not handle large-scale data sets. In this thesis, we propose a distributed trust region Newton method for training L2-loss linear rankSVM with two kinds of parallelizations. We carefully discuss the techniques for reducing the communication cost and speeding up the computation, and compare both kinds of parallelizations on dense and sparse data sets. Experiments show that our distributed methods are much faster than the single-machine method on two kinds of data sets: one whose number of instances is much larger than its number of features, and one where the opposite holds.
publishDate: 2016
url: http://ndltd.ncl.edu.tw/handle/22726152369307756919