Scheduling Algorithms of Co-optimizing Thread-Level-Parallelism and Cache Utilization for GPGPUs
Master's Thesis === National Chiao Tung University === Department of Electronics Engineering & Institute of Electronics === 102 === Thread-Level-Parallelism (TLP) and cache utilization are two significant performance factors in modern throughput processors. The conflicting correlation between the two factors makes the design a non-trivial task: increasing TLP would aggravate cache co...
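The trade-off named in the abstract can be sketched with a toy model (not from the thesis; capacities and working-set sizes below are hypothetical): each active warp needs its own working set in a fixed-size L1 cache, so raising the number of concurrently scheduled warps shrinks the per-warp cache share and the hit rate, even as the latency-hiding benefit of TLP grows.

```python
# Toy model of the TLP vs. cache-utilization conflict: more active warps
# means less cache capacity per warp, hence a lower per-warp hit rate.

CACHE_LINES = 512   # hypothetical L1 capacity, in cache lines
WORKING_SET = 96    # hypothetical per-warp working set, in cache lines

def hit_rate(active_warps: int) -> float:
    """Fraction of a warp's working set that fits in its share of the cache."""
    share = CACHE_LINES / active_warps
    return min(1.0, share / WORKING_SET)

for warps in (2, 4, 8, 16, 32):
    print(f"{warps:2d} warps -> per-warp hit rate {hit_rate(warps):.2f}")
```

Under this toy model, hit rate stays at 1.0 only while the per-warp share covers the working set; beyond that point, adding warps trades cache hits for parallelism, which is exactly the tension a co-optimizing scheduler must balance.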
Main Authors: | Lu, Chin-Fu, 呂勁甫 |
Other Authors: | Jou, Jing-Yang |
Format: | Others |
Language: | en_US |
Published: | 2014 |
Online Access: | http://ndltd.ncl.edu.tw/handle/99321023691038445807 |
Similar Items
- Addressing software-managed cache development effort in GPGPUs
  by: Lashgar, Ahmad
  Published: (2017)
- An Architecture-Aware Thread Mapping Methodology for Fuzzy Neural Networks on GPGPUs
  by: Tseng, Hao-Yuan, et al.
  Published: (2012)
- Memory Contention-Aware Warp Scheduler for GPGPUs
  by: Liou, Ya-Jie, et al.
  Published: (2014)
- A Cache Behavior Aware Multithreading Degree Decision Scheme on GPGPUs
  by: Yen, Ta-Kang, et al.
  Published: (2014)
- Improving Multi-core Cache Utilization with Data Blocking and Thread Grouping
  by: Lu, Wei-I, et al.
  Published: (2010)