Analysis and Implementation of Large-scale Linear RankSVM in Distributed Environments


Bibliographic Details
Main Authors: Wei-Lun Huang, 黃煒倫
Other Authors: Chih-Jen Lin
Format: Others
Language: en_US
Published: 2016
Online Access: http://ndltd.ncl.edu.tw/handle/22726152369307756919
Description
Summary: Master's === National Taiwan University === Graduate Institute of Computer Science and Information Engineering === 104 === Linear rankSVM is a useful method for quickly producing a baseline model for learning to rank. Although its parallelization has been investigated and implemented on GPUs, it may not handle large-scale data sets. In this thesis, we propose a distributed trust region Newton method for training L2-loss linear rankSVM with two kinds of parallelization. We carefully discuss techniques for reducing the communication cost and speeding up the computation, and compare both kinds of parallelization on dense and sparse data sets. Experiments show that our distributed methods are much faster than the single-machine method on two kinds of data sets: one in which the number of instances far exceeds the number of features, and one in which the opposite holds.
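The objective being trained can be illustrated with a minimal sketch. The following Python snippet (function and variable names are hypothetical, not from the thesis) computes the standard L2-loss linear rankSVM objective: a regularization term plus a squared hinge loss summed over preference pairs of documents within the same query. The thesis's contribution is a distributed trust region Newton solver for this objective; the naive O(l^2) pair enumeration below is only for clarity, not the optimized method described in the abstract.

```python
import numpy as np

def l2_rank_svm_obj(w, X, y, qid, C=1.0):
    """L2-loss linear rankSVM objective (hypothetical helper, for illustration):
    f(w) = 0.5 * w.w + C * sum over pairs (i, j) in the same query with
    y_i > y_j of max(0, 1 - w.(x_i - x_j))**2."""
    loss = 0.0
    for i in range(len(y)):
        for j in range(len(y)):
            # only compare documents returned for the same query
            if qid[i] == qid[j] and y[i] > y[j]:
                margin = w @ (X[i] - X[j])
                loss += max(0.0, 1.0 - margin) ** 2
    return 0.5 * (w @ w) + C * loss

# Toy example: one query, two documents, the first relevant, the second not.
X = np.array([[1.0, 0.0], [0.0, 1.0]])
y = np.array([2, 1])          # relevance labels
qid = np.array([1, 1])        # both documents belong to query 1
w = np.zeros(2)
print(l2_rank_svm_obj(w, X, y, qid))  # -> 1.0 (one pair, margin 0)
```

A trust region Newton method minimizes this f(w) using its gradient and Hessian-vector products; the distributed variants in the thesis split that work across machines to reduce per-machine computation and communication.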