A Study on the Parallelism of Synchronous Ranked Neural Networks in CUDA System
Master's === National Chiao Tung University === Institute of Communications Engineering === Academic Year 100 === In this thesis, we use the Compute Unified Device Architecture (CUDA), the graphics processing unit (GPU) computing architecture introduced by NVIDIA, to exploit the property that a synchronous ranked neural network (SRNN) can be updated synchronously. Focusing on this operating characteristic of the SRNN, ...
Main Authors: Chen, Hsing-Hao 陳星豪
Other Authors: Tien, Po-Lung 田伯隆
Format: Others
Language: zh-TW
Published: 2012
Online Access: http://ndltd.ncl.edu.tw/handle/75845188881895770189
id
ndltd-TW-100NCTU5435075
record_format
oai_dc
spelling
ndltd-TW-100NCTU5435075 2016-03-28T04:20:37Z http://ndltd.ncl.edu.tw/handle/75845188881895770189 A Study on the Parallelism of Synchronous Ranked Neural Networks in CUDA System 同步分級神經網路在CUDA架構上的平行化研究 Chen, Hsing-Hao 陳星豪 Master's, National Chiao Tung University, Institute of Communications Engineering, Academic Year 100. In this thesis, we use the Compute Unified Device Architecture (CUDA), the graphics processing unit (GPU) computing architecture introduced by NVIDIA, to exploit the property that a synchronous ranked neural network (SRNN) can be updated synchronously. Focusing on this operating characteristic of the SRNN, we examine how the block update scheme and the neurons' rank distribution affect the amount of computation required for the SRNN to converge, aiming to find, for the different problems an SRNN may handle, settings that minimize this computation. Finally, we take the packet-scheduling problem of a WDM OPAS-based optical interconnect system (WOPIS) as the problem handled by our SRNN model under block updating, and we verify that these two factors affect convergence as expected. In addition, we investigate the trade-off between the number of update blocks and execution parallelism by observing the execution efficiency achieved at different degrees of parallelism. Tien, Po-Lung 田伯隆 2012 degree thesis ; 71 pages ; zh-TW
collection
NDLTD
language
zh-TW
format
Others
sources
NDLTD
description
Master's === National Chiao Tung University === Institute of Communications Engineering === Academic Year 100 === In this thesis, we use the Compute Unified Device Architecture (CUDA), the graphics processing unit (GPU) computing architecture introduced by NVIDIA, to exploit the property that a synchronous ranked neural network (SRNN) can be updated synchronously. Focusing on this operating characteristic of the SRNN, we examine how the block update scheme and the neurons' rank distribution affect the amount of computation required for the SRNN to converge, aiming to find, for the different problems an SRNN may handle, settings that minimize this computation. Finally, we take the packet-scheduling problem of a WDM OPAS-based optical interconnect system (WOPIS) as the problem handled by our SRNN model under block updating, and we verify that these two factors affect convergence as expected. In addition, we investigate the trade-off between the number of update blocks and execution parallelism by observing the execution efficiency achieved at different degrees of parallelism.
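To make the block update scheme mentioned in the abstract concrete, the following is a minimal CUDA sketch, not taken from the thesis: it assumes a Hopfield-style SRNN with a weight matrix W, thresholds theta, and binary neuron states partitioned into ranks, where all neurons of one rank are updated synchronously (in parallel) and the ranks are processed one after another. All identifiers (update_rank_kernel, sweep_once, rank_offsets, rank_sizes) are hypothetical.

```cuda
#include <cuda_runtime.h>

// Update every neuron of one rank (update block) in parallel.
// Each thread reads only the previous state (state_in) and writes the new
// state (state_out), so all neurons of the rank are updated synchronously.
__global__ void update_rank_kernel(const float *W, const float *theta,
                                   const float *state_in, float *state_out,
                                   int n, int rank_offset, int rank_size)
{
    int idx = blockIdx.x * blockDim.x + threadIdx.x;  // position inside the rank
    if (idx >= rank_size) return;

    int i = rank_offset + idx;                        // global neuron index
    float net = 0.0f;
    for (int j = 0; j < n; ++j)                       // weighted input sum
        net += W[i * n + j] * state_in[j];

    state_out[i] = (net >= theta[i]) ? 1.0f : 0.0f;   // binary threshold neuron
}

// One sweep over all ranks: ranks are processed sequentially, neurons within
// a rank in parallel. d_state holds the current network state; d_next is a
// scratch buffer that keeps each rank's update synchronous.
void sweep_once(const float *d_W, const float *d_theta,
                float *d_state, float *d_next,
                int n, const int *rank_offsets, const int *rank_sizes,
                int num_ranks)
{
    const int threads = 256;
    for (int r = 0; r < num_ranks; ++r) {
        int blocks = (rank_sizes[r] + threads - 1) / threads;
        update_rank_kernel<<<blocks, threads>>>(d_W, d_theta, d_state, d_next,
                                                n, rank_offsets[r], rank_sizes[r]);
        // Copy the freshly updated rank back so later ranks see its new state.
        cudaMemcpy(d_state + rank_offsets[r], d_next + rank_offsets[r],
                   rank_sizes[r] * sizeof(float), cudaMemcpyDeviceToDevice);
    }
    cudaDeviceSynchronize();  // make the completed sweep visible to the host
}
```

Under this layout each rank corresponds to one kernel launch, so the number and size of the update blocks directly determine how much parallel work the GPU receives per launch, which is one way to view the trade-off between update-block count and execution parallelism that the abstract refers to.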
author2
Tien, Po-Lung
author_facet
Tien, Po-Lung Chen, Hsing-Hao 陳星豪
author
Chen, Hsing-Hao 陳星豪
spellingShingle
Chen, Hsing-Hao 陳星豪 A Study on the Parallelism of Synchronous Ranked Neural Networks in CUDA System
author_sort
Chen, Hsing-Hao
title
A Study on the Parallelism of Synchronous Ranked Neural Networks in CUDA System
title_short
A Study on the Parallelism of Synchronous Ranked Neural Networks in CUDA System
title_full
A Study on the Parallelism of Synchronous Ranked Neural Networks in CUDA System
title_fullStr
A Study on the Parallelism of Synchronous Ranked Neural Networks in CUDA System
title_full_unstemmed
A Study on the Parallelism of Synchronous Ranked Neural Networks in CUDA System
title_sort
study on the parallelism of synchronous ranked neural networks in cuda system
publishDate
2012
url
http://ndltd.ncl.edu.tw/handle/75845188881895770189
work_keys_str_mv
AT chenhsinghao astudyontheparallelismofsynchronousrankedneuralnetworksincudasystem AT chénxīngháo astudyontheparallelismofsynchronousrankedneuralnetworksincudasystem AT chenhsinghao tóngbùfēnjíshénjīngwǎnglùzàicudajiàgòushàngdepíngxínghuàyánjiū AT chénxīngháo tóngbùfēnjíshénjīngwǎnglùzàicudajiàgòushàngdepíngxínghuàyánjiū AT chenhsinghao studyontheparallelismofsynchronousrankedneuralnetworksincudasystem AT chénxīngháo studyontheparallelismofsynchronousrankedneuralnetworksincudasystem
_version_
1718213506028797952 |