Adaptive runtime exploiting sparsity in tensor of deep learning on heterogeneous systems
Master's thesis === National Taiwan University === Graduate Institute of Computer Science and Information Engineering === Academic year 105
Main Author: Kuo-You Peng (彭國祐)
Other Authors: Wei-Chung Hsu (徐慰中)
Format: Others
Language: en_US
Published: 2017
Online Access: http://ndltd.ncl.edu.tw/handle/46739294222658497904
id | ndltd-TW-105NTU05392019
record_format | oai_dc
spelling | ndltd-TW-105NTU053920192017-11-12T04:38:58Z http://ndltd.ncl.edu.tw/handle/46739294222658497904 Adaptive runtime exploiting sparsity in tensor of deep learning on heterogeneous systems 在異質架構下深度學習利用多為數組中的稀疏度做動態調整 Kuo-You Peng 彭國祐 Master's thesis, National Taiwan University, Graduate Institute of Computer Science and Information Engineering, academic year 105. Heterogeneous computing achieves high performance by exploiting the high parallelism and special types of computation (such as SIMD operations) available in applications on the best-fit computation devices. For example, massive and regular SIMD operations can be computed more efficiently on a GPU. However, the performance of a heterogeneous program can degrade when the portion assigned to the GPU encounters irregular tasks. Deep learning is an application that has the characteristic of high parallelism but may also encounter irregular tasks. This study introduces a method that reduces computation and improves the performance of deep learning applications by recording information at runtime. Using the collected information, we can adaptively redistribute the workload when we encounter irregular tasks. When deep learning encounters irregular tasks, our method splits the workload of the deep learning application into two parts: a dense workload and a sparse workload. The dense workload is deployed on the GPU, and the sparse part is sent to the CPU. In this way, the GPU achieves better computing efficiency, and the CPU handles the sparse part more competently than the GPU would. Wei-Chung Hsu 徐慰中 2017 thesis 44 en_US
collection | NDLTD
language | en_US
format | Others
sources | NDLTD
description |
Master's thesis === National Taiwan University === Graduate Institute of Computer Science and Information Engineering === Academic year 105 === Heterogeneous computing achieves high performance by exploiting the high parallelism and special types of computation (such as SIMD operations) available in applications on the best-fit computation devices. For example, massive and regular SIMD operations can be computed more efficiently on a GPU. However, the performance of a heterogeneous program can degrade when the portion assigned to the GPU encounters irregular tasks.
Deep learning is an application that has the characteristic of high parallelism but may also encounter irregular tasks. This study introduces a method that reduces computation and improves the performance of deep learning applications by recording information at runtime. Using the collected information, we can adaptively redistribute the workload when we encounter irregular tasks.
When deep learning encounters irregular tasks, our method splits the workload of the deep learning application into two parts: a dense workload and a sparse workload. The dense workload is deployed on the GPU, and the sparse part is sent to the CPU. In this way, the GPU achieves better computing efficiency, and the CPU handles the sparse part more competently than the GPU would.
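The dense/sparse split described in the abstract can be illustrated with a minimal sketch. This is not the author's implementation: the function name `split_by_sparsity`, the per-row zero-fraction metric, and the 0.5 threshold are all illustrative assumptions, and both paths run on the CPU here purely to show the partitioning logic (in the thesis's setting, the dense matrix multiply would be dispatched to the GPU and the zero-skipping loop to the CPU).

```python
import numpy as np

def split_by_sparsity(batch, weights, threshold=0.5):
    """Compute batch @ weights by partitioning rows of `batch` on sparsity.

    Rows below the zero-fraction threshold take a regular dense matmul
    (the kind of massive SIMD work a GPU handles well); rows above it
    take an irregular skip-the-zeros path (better suited to a CPU).
    Names and threshold are illustrative, not from the thesis.
    """
    zero_frac = np.mean(batch == 0, axis=1)   # per-row fraction of zeros
    dense_mask = zero_frac < threshold

    out = np.empty((batch.shape[0], weights.shape[1]))

    # Dense part: one regular, contiguous matrix multiply.
    out[dense_mask] = batch[dense_mask] @ weights

    # Sparse part: touch only the nonzero entries of each row.
    for i in np.flatnonzero(~dense_mask):
        nz = np.flatnonzero(batch[i])
        out[i] = batch[i, nz] @ weights[nz]   # skip zero multiplications

    return out
```

Both paths produce the same product as a full dense matmul; the payoff of the split comes from routing each part to the device that handles its access pattern best.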
author2 | Wei-Chung Hsu
author_facet | Wei-Chung Hsu Kuo-You Peng 彭國祐
author | Kuo-You Peng 彭國祐
spellingShingle | Kuo-You Peng 彭國祐 Adaptive runtime exploiting sparsity in tensor of deep learning on heterogeneous systems
author_sort | Kuo-You Peng
title | Adaptive runtime exploiting sparsity in tensor of deep learning on heterogeneous systems
title_short | Adaptive runtime exploiting sparsity in tensor of deep learning on heterogeneous systems
title_full | Adaptive runtime exploiting sparsity in tensor of deep learning on heterogeneous systems
title_fullStr | Adaptive runtime exploiting sparsity in tensor of deep learning on heterogeneous systems
title_full_unstemmed | Adaptive runtime exploiting sparsity in tensor of deep learning on heterogeneous systems
title_sort | adaptive runtime exploiting sparsity in tensor of deep learning on heterogeneous systems
publishDate | 2017
url | http://ndltd.ncl.edu.tw/handle/46739294222658497904
work_keys_str_mv | AT kuoyoupeng adaptiveruntimeexploitingsparsityintensorofdeeplearningonheterogeneoussystems AT péngguóyòu adaptiveruntimeexploitingsparsityintensorofdeeplearningonheterogeneoussystems AT kuoyoupeng zàiyìzhìjiàgòuxiàshēndùxuéxílìyòngduōwèishùzǔzhōngdexīshūdùzuòdòngtàidiàozhěng AT péngguóyòu zàiyìzhìjiàgòuxiàshēndùxuéxílìyòngduōwèishùzǔzhōngdexīshūdùzuòdòngtàidiàozhěng
_version_ | 1718561714346131456