The Development of High-Utilization Scheduling for Malleable Tasks Using Deep Reinforcement Learning

Bibliographic Details
Main Authors: Chang, Yen-Ling, 張晏菱
Other Authors: Wu, I-Chen
Format: Others
Language: zh-TW
Published: 2019
Online Access: http://ndltd.ncl.edu.tw/handle/pds4at
Description
Summary: Master's thesis === National Chiao Tung University === Institute of Computer Science and Engineering === academic year 107 === Modern high-performance computing platforms can perform dynamic tasks using elastic resource provisioning. However, complex schedules lead to useless resource fragments appearing in the system from time to time. This paper employs our previously developed system to collect resource fragments on the computing farm. The goal is to increase resource utilization by using these resource fragments to perform lightweight malleable tasks. This paper investigates an efficient approach to assigning a set of malleable tasks to a group of resource fragments. We propose a threshold calculation method; the threshold value is used to estimate the success rate of matching a task length to a type of resource fragment. Previous threshold algorithms were computed from fixed formulas or statistical data and were not able to adapt to changing environments. In this paper, we adopt the PPO reinforcement learning method to learn the correlation between the system state and the threshold values, and obtain better results than those of previous approaches.
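
The abstract does not give the concrete state features, action encoding, or reward used in the thesis, so the following is only a minimal sketch of the general idea of learning length thresholds with PPO, assuming the Gymnasium and Stable-Baselines3 libraries. The ThresholdEnv class, FRAG_DURATION values, observation layout, and reward shaping below are illustrative assumptions, not the authors' implementation.

```python
# Sketch: learn per-fragment-type length thresholds with PPO (hypothetical setup).
import numpy as np
import gymnasium as gym
from gymnasium import spaces
from stable_baselines3 import PPO

N_FRAG_TYPES = 3                                  # assumed number of fragment types
FRAG_DURATION = np.array([2.0, 5.0, 10.0])        # assumed nominal lifetime per type


class ThresholdEnv(gym.Env):
    """Toy environment: each step a batch of malleable tasks arrives, the agent
    outputs one length threshold per fragment type, and a task is placed on a
    fragment only if its length is below that threshold."""

    def __init__(self, batch_size=32):
        super().__init__()
        self.batch_size = batch_size
        # Observation: normalized fragment availability plus recent mean task length.
        self.observation_space = spaces.Box(0.0, 1.0, shape=(N_FRAG_TYPES + 1,),
                                            dtype=np.float32)
        # Action: one threshold in [0, 1] per fragment type, scaled by its duration.
        self.action_space = spaces.Box(0.0, 1.0, shape=(N_FRAG_TYPES,),
                                       dtype=np.float32)

    def _observe(self):
        frag_avail = self.np_random.uniform(0.2, 1.0, size=N_FRAG_TYPES)
        mean_len = self.np_random.uniform(0.1, 1.0)
        return np.append(frag_avail, mean_len).astype(np.float32)

    def reset(self, seed=None, options=None):
        super().reset(seed=seed)
        self.t = 0
        return self._observe(), {}

    def step(self, action):
        thresholds = np.clip(action, 0.0, 1.0) * FRAG_DURATION
        lengths = self.np_random.exponential(3.0, size=self.batch_size)
        reward = 0.0
        for length in lengths:
            # Pick the smallest fragment type whose threshold admits the task.
            fit = np.where(length <= thresholds)[0]
            if fit.size == 0:
                continue                          # task left for the regular scheduler
            ftype = fit[0]
            lifetime = self.np_random.normal(FRAG_DURATION[ftype], 1.0)
            if length <= lifetime:
                reward += length                  # fragment time usefully filled
            else:
                reward -= 0.5 * length            # task killed: wasted-work penalty
        self.t += 1
        return self._observe(), float(reward), self.t >= 100, False, {}


if __name__ == "__main__":
    env = ThresholdEnv()
    model = PPO("MlpPolicy", env, verbose=0)
    model.learn(total_timesteps=20_000)           # small budget; untuned sketch
    obs, _ = env.reset(seed=0)
    action, _ = model.predict(obs, deterministic=True)
    print("learned thresholds:", np.clip(action, 0.0, 1.0) * FRAG_DURATION)
```

The sketch mirrors the abstract's framing: the policy maps the current system state to threshold values, and the reward favors filling fragments while penalizing tasks that outlive their fragment, which is what a fixed-formula or purely statistical threshold cannot adapt to as the workload changes.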