THROUGHPUT OPTIMIZATION AND RESOURCE ALLOCATION ON GPUS UNDER MULTI-APPLICATION EXECUTION
Main Author:
Format: Others
Published: OpenSIUC, 2017
Online Access:
https://opensiuc.lib.siu.edu/theses/2255
https://opensiuc.lib.siu.edu/cgi/viewcontent.cgi?article=3270&context=theses
Summary: Platform heterogeneity prevails as a solution to the throughput and computational challenges imposed by parallel applications and technology scaling. Specifically, Graphics Processing Units (GPUs) are based on the Single Instruction Multiple Thread (SIMT) paradigm and can offer tremendous speed-up for parallel applications. However, GPUs were designed to execute a single application at a time. Under simultaneous multi-application execution, the GPUs' massive multi-threading causes applications to compete destructively for shared resources (caches and memory controllers), resulting in significant throughput degradation. This thesis presents a methodology for minimizing interference in shared resources and providing efficient concurrent execution of multiple applications on GPUs. In particular, the proposed methodology (i) performs application classification; (ii) analyzes the per-class interference; (iii) finds the best matching between classes; and (iv) employs an efficient resource allocation. Experimental results show that, compared to other optimization techniques, the proposed approach increases system throughput by an average of 36% for two concurrent applications, and achieves an average gain of 23% for three concurrent applications.
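The abstract outlines a four-step methodology: classify applications, profile per-class interference, match classes for co-execution, and allocate resources. The snippet below is a minimal, self-contained sketch of how the class-matching step (iii) could look, assuming a hypothetical per-class interference table and hypothetical class labels and application names; it is illustrative only and is not the code or the exact algorithm from the thesis.

```python
"""Illustrative sketch (not the thesis implementation): pair applications
for concurrent GPU execution using a per-class interference table.

Assumptions (hypothetical, for illustration only):
  - Each application has already been classified (step i), e.g. as
    'compute', 'cache', or 'memory' bound.
  - INTERFERENCE gives an estimated throughput penalty in [0, 1] for each
    pair of classes, obtained from per-class profiling (step ii).
"""
from itertools import combinations

# Hypothetical per-class interference penalties (lower is better).
INTERFERENCE = {
    frozenset({"compute"}): 0.10,
    frozenset({"compute", "cache"}): 0.15,
    frozenset({"compute", "memory"}): 0.20,
    frozenset({"cache"}): 0.45,
    frozenset({"cache", "memory"}): 0.35,
    frozenset({"memory"}): 0.50,
}

def penalty(class_a, class_b):
    """Estimated slowdown when two application classes share the GPU."""
    return INTERFERENCE[frozenset({class_a, class_b})]

def best_pairing(apps):
    """Greedily co-schedule the application pairs whose classes interfere
    the least (step iii). `apps` maps application name -> class label."""
    remaining = dict(apps)
    schedule = []
    while len(remaining) >= 2:
        # Pick the remaining pair with the lowest estimated interference.
        a, b = min(combinations(remaining, 2),
                   key=lambda p: penalty(remaining[p[0]], remaining[p[1]]))
        schedule.append((a, b, penalty(remaining[a], remaining[b])))
        del remaining[a], remaining[b]
    return schedule, list(remaining)  # any leftover application runs alone

if __name__ == "__main__":
    # Hypothetical workloads and class labels.
    apps = {"bfs": "memory", "hotspot": "compute",
            "kmeans": "cache", "nw": "compute"}
    pairs, leftover = best_pairing(apps)
    for a, b, p in pairs:
        print(f"co-run {a} + {b}: estimated interference penalty {p:.2f}")
```

In this sketch the resource allocation of step (iv) is not modeled; a fuller version would additionally split cache ways or memory-controller bandwidth between the two co-scheduled applications according to their classes.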