GPU accelerate framework on variant Locally Linear Embedding dimension reduction algorithm
Master's === National Tsing Hua University === Institute of Information Systems and Applications === 98 === More and more data formats appear in the information world, and many algorithms and techniques, such as data mining and data analysis, are used to present the relationships within data. Real-world data, however, usually have a high-dimensional structure, which makes it difficult to...
Main Authors: | Chen, Tseng-Yi 陳增益 |
---|---|
Other Authors: | Shih, Wei-Kuan 石維寬 |
Format: | Others |
Language: | zh-TW |
Published: | 2010 |
Online Access: | http://ndltd.ncl.edu.tw/handle/55501608598556211437 |
id |
ndltd-TW-098NTHU5394004 |
---|---|
record_format |
oai_dc |
spelling |
ndltd-TW-098NTHU53940042015-10-13T18:20:42Z http://ndltd.ncl.edu.tw/handle/55501608598556211437 GPU accelerate framework on variant Locally Linear Embedding dimension reduction algorithm 類LocallyLinearEmbedding資料降維演算法GPU加速框架 Chen, Tseng-Yi 陳增益 Master's, National Tsing Hua University, Institute of Information Systems and Applications, 98. More and more data formats appear in the information world, and many algorithms and techniques, such as data mining and data analysis, are used to present the relationships within data. Real-world data, however, usually have a high-dimensional structure, which makes relation-presentation algorithms difficult to apply and their resulting graphs hard for users to interpret. A technique for reducing data dimensionality is therefore needed. Many dimension reduction algorithms exist, such as PCA, MDS, Isomap, and LLE, and many papers discuss questions such as how to reduce dimensionality accurately or how to modify an algorithm's flow to increase its precision. Few papers, however, address the efficiency of these algorithms or how to speed them up. This thesis increases the computation speed of a dimension reduction algorithm by parallelizing it on a different computation platform. We chose Locally Linear Embedding (LLE) as our target: first, real-world data sets usually form nonlinear manifolds, and LLE is a nonlinear dimension reduction algorithm; second, LLE is highly parallelizable. Our goal is therefore to improve LLE through parallel computation. GPU computing has recently become popular; it offers powerful floating-point performance and a high degree of parallelism, so we use a GPU architecture to execute our parallel LLE algorithm.
We port only the k-nearest-neighbor (KNN) search and the large sparse eigen solution (LSES) functions to the GPU, because these two steps carry the heaviest computational load. Porting KNN and LSES yields good performance: the parallel GPU KNN algorithm achieves a 40x–50x speedup, and LSES achieves a 10x speedup. Shih, Wei-Kuan 石維寬 2010 學位論文 ; thesis 49 zh-TW |
collection |
NDLTD |
language |
zh-TW |
format |
Others |
sources |
NDLTD |
description |
Master's === National Tsing Hua University === Institute of Information Systems and Applications === 98 === More and more data formats appear in the information world, and many algorithms and techniques, such as data mining and data analysis, are used to present the relationships within data. Real-world data, however, usually have a high-dimensional structure, which makes relation-presentation algorithms difficult to apply and their resulting graphs hard for users to interpret. A technique for reducing data dimensionality is therefore needed.
Many dimension reduction algorithms exist, such as PCA, MDS, Isomap, and LLE, and many papers discuss questions such as how to reduce dimensionality accurately or how to modify an algorithm's flow to increase its precision. Few papers, however, address the efficiency of these algorithms or how to speed them up. This thesis increases the computation speed of a dimension reduction algorithm by parallelizing it on a different computation platform. We chose Locally Linear Embedding (LLE) as our target: first, real-world data sets usually form nonlinear manifolds, and LLE is a nonlinear dimension reduction algorithm; second, LLE is highly parallelizable. Our goal is therefore to improve LLE through parallel computation.
GPU computing has recently become popular; it offers powerful floating-point performance and a high degree of parallelism, so we use a GPU architecture to execute our parallel LLE algorithm. We port only the k-nearest-neighbor (KNN) search and the large sparse eigen solution (LSES) functions to the GPU, because these two steps carry the heaviest computational load.
Porting KNN and LSES yields good performance: the parallel GPU KNN algorithm achieves a 40x–50x speedup, and LSES achieves a 10x speedup.
|
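The KNN search stage that the abstract describes porting to the GPU is data-parallel: every point's distances to all other points can be computed independently, which is what makes a 40x–50x GPU speedup plausible. A minimal CPU-side NumPy sketch of brute-force KNN is shown below; the function name and array shapes are illustrative assumptions, not taken from the thesis itself.

```python
import numpy as np

def knn_bruteforce(X, k):
    """Return the indices of the k nearest neighbors of each row of X.

    X : (n, d) array of n points in d dimensions.
    Each row of the distance matrix is independent of the others,
    which is the parallelism a GPU implementation exploits
    (e.g. one thread block per query point).
    """
    # Pairwise squared Euclidean distances via the identity
    # ||a - b||^2 = ||a||^2 + ||b||^2 - 2 a.b
    sq = (X ** 2).sum(axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * (X @ X.T)
    np.fill_diagonal(d2, np.inf)  # a point is not its own neighbor
    # Sort each row independently and keep the k closest indices.
    return np.argsort(d2, axis=1)[:, :k]

# Small example: two well-separated pairs of points.
X = np.array([[0.0, 0.0], [0.0, 1.0], [5.0, 5.0], [5.0, 6.0]])
neighbors = knn_bruteforce(X, 1)  # each point's single nearest neighbor
```

In LLE these neighbor sets feed the reconstruction-weight step, and the final embedding comes from the bottom eigenvectors of a large sparse matrix, which is the second GPU-ported stage (LSES) mentioned in the abstract.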
author2 |
Shih, Wei-Kuan |
author_facet |
Shih, Wei-Kuan Chen, Tseng-Yi 陳增益 |
author |
Chen, Tseng-Yi 陳增益 |
spellingShingle |
Chen, Tseng-Yi 陳增益 GPU accelerate framework on variant Locally Linear Embedding dimension reduction algorithm |
author_sort |
Chen, Tseng-Yi |
title |
GPU accelerate framework on variant Locally Linear Embedding dimension reduction algorithm |
title_short |
GPU accelerate framework on variant Locally Linear Embedding dimension reduction algorithm |
title_full |
GPU accelerate framework on variant Locally Linear Embedding dimension reduction algorithm |
title_fullStr |
GPU accelerate framework on variant Locally Linear Embedding dimension reduction algorithm |
title_full_unstemmed |
GPU accelerate framework on variant Locally Linear Embedding dimension reduction algorithm |
title_sort |
gpu accelerate framework on variant locally linear embedding dimension reduction algorithm |
publishDate |
2010 |
url |
http://ndltd.ncl.edu.tw/handle/55501608598556211437 |
work_keys_str_mv |
AT chentsengyi gpuaccelerateframeworkonvariantlocallylinearembeddingdimensionreductionalgorithm AT chénzēngyì gpuaccelerateframeworkonvariantlocallylinearembeddingdimensionreductionalgorithm AT chentsengyi lèilocallylinearembeddingzīliàojiàngwéiyǎnsuànfǎgpujiāsùkuāngjià AT chénzēngyì lèilocallylinearembeddingzīliàojiàngwéiyǎnsuànfǎgpujiāsùkuāngjià |
_version_ |
1718029961513664512 |